Multi-Strategy-Driven Salp Swarm Algorithm for Global Optimization

Abstract

In response to the shortcomings of the Salp Swarm Algorithm (SSA), such as low convergence accuracy and slow convergence speed, a Multi-Strategy-Driven Salp Swarm Algorithm (MSD-SSA) was proposed. First, the food source or a random leader was associated with the current salp at the beginning of each iteration, and a Levy flight random walk and a crossover operator applied with small probability were added to improve the global search and the ability to jump out of local optima. Secondly, the mean position of the leaders was used to establish a link with the followers, which effectively avoided blind following by the followers and greatly improved the convergence speed of the algorithm. Finally, Brownian motion random steps were introduced to improve the convergence accuracy of the population near the food source. The improved strategies were switched according to changes in the adaptive parameter, balancing the exploration and exploitation of SSA. In the simulation experiments, the performance of the algorithm was examined by running SSA and MSD-SSA on commonly used CEC benchmark test functions and CEC2017 constrained optimization problems, and the effectiveness of MSD-SSA was further verified by solving three real engineering problems. The results showed that MSD-SSA improved the convergence speed and convergence accuracy of the algorithm, and achieved good results on practical engineering problems.

Share and Cite:

Gao, Z. and Wang, B. (2023) Multi-Strategy-Driven Salp Swarm Algorithm for Global Optimization. Journal of Computer and Communications, 11, 88-117. doi: 10.4236/jcc.2023.117007.

1. Introduction

With the continuous development of human cognition and society, the complexity of various application problems and scientific computations has increased, and the inherent drawbacks of traditional optimization methods make them unable to meet practical needs within a reasonable time. In recent years, meta-heuristic algorithms have received much attention and have been applied in many fields because of their operational flexibility, ease of implementation and gradient-free mechanism. Among them, evolutionary algorithms include the Genetic Algorithm (GA) [1], Differential Evolution (DE) [2], etc. Swarm intelligence techniques mainly include Particle Swarm Optimization (PSO) [3], Ant Colony Optimization (ACO) [4], the Artificial Bee Colony Algorithm (ABC) [5], Cuckoo Search (CS) [6], etc. Recently proposed swarm intelligence algorithms include Polar Bear Optimization (PBO) [7], the Grey Wolf Optimizer (GWO) [8], the Whale Optimization Algorithm (WOA) [9], the Salp Swarm Algorithm (SSA) [10], etc. The Salp Swarm Algorithm was proposed by Mirjalili et al. It has an adaptive parameter that facilitates global search in the early stage of the algorithm and local exploitation in the later stage. Similar to other swarm intelligence algorithms, it suffers from the disadvantages of being prone to local optima and low convergence accuracy in late iterations.

Since the Salp Swarm Algorithm was proposed in 2017, many scholars at home and abroad have improved its performance by addressing its shortcomings. Thawkar [11] proposed a hybrid model using Teaching-Learning-Based Optimization and the Salp Swarm Algorithm (TLBO-SSA), applied it to the diagnosis of breast cancer, and achieved good results. Neggaz et al. [12] introduced the sine cosine algorithm into the Salp Swarm Algorithm to obtain an enhanced algorithm (ISSAFD), which improved exploration in the global search stage, enhanced population diversity, avoided falling into local optima, and balanced the exploration and exploitation of the algorithm. Zhang et al. [13] proposed a multi-strategy Enhanced Salp Swarm Algorithm (ESSA), which uses orthogonal learning to generate opposite solutions and expand population diversity, uses quadratic interpolation to improve the local search ability of the algorithm, and validates the accuracy of the improved algorithm on test functions. Heidari et al. [14] proposed the Chaotic Salp Swarm Algorithm (CDESSA), in which chaotic initialization was introduced at the initial stage to expand population diversity and differential evolution was used to prevent premature convergence. Chen et al. [15] used a weighted center of gravity for the leaders and adaptive weights for the followers, balancing the exploitation and exploration of the algorithm. Yang et al. [16] proposed a multi-strategy fusion Salp Swarm Algorithm (ISSA), which uses the mid-vertical theorem to change the position update mode of the followers and introduces a perturbation mechanism based on the mid-vertical convergence strategy to improve the ability of the algorithm to jump out of local optima. Liu et al. [17] proposed a Differential Evolution Parasitic Salp Swarm Algorithm (PDESSA), which introduced the position information of the previous generation of leaders to strengthen the global search, introduced adaptive inertia weights to balance the exploration and exploitation of the algorithm, and finally introduced a dual-population mechanism with evolution and parasitism strategies, which increased population diversity and improved the ability of the algorithm to jump out of local extrema. Zhang et al. [18] introduced the Levy flight strategy into the algorithm and updated the followers by comparing their fitness values, which enhanced the global search ability and convergence speed of the algorithm. Xie et al. [19] proposed a New Salp Swarm Algorithm (NSSA), which introduced the idea of following the head wolf from the grey wolf optimization algorithm into the leader update, tested it on 23 benchmark functions and applied it to image matching.

All of the above-improved versions enhance the optimization performance of the original algorithm to a certain extent. However, to make the algorithm capable of solving more complex optimization problems, it still needs further improvement. In this paper, we propose a Multi-Strategy-Driven Salp Swarm Algorithm (MSD-SSA). Firstly, in the first half of the iterations, the leader's position is updated using Levy flight with a small probability; Levy flight is a random walk with a relatively high probability of long steps, which expands the global search range and escapes local extrema more effectively. Secondly, a crossover operation is performed on the leaders and the better leaders are retained based on a greedy strategy to improve the convergence speed of the algorithm. Thirdly, in the second half of the iterations, a Brownian motion random step is introduced so that the later iterations search more deeply and the convergence accuracy of the algorithm improves. Finally, the follower update differs from the original algorithm: the average position of the leaders is used to update the followers, which strengthens the connection between the followers and the leaders and speeds up convergence. The performance of MSD-SSA is verified by experiments on commonly used benchmark test functions, the CEC2017 [20] test set and three engineering problems, and by comparison with the traditional Salp Swarm Algorithm.

2. Salp Swarm Algorithm (SSA)

The inspiration for the Salp Swarm Algorithm [10] comes from the swarming and foraging behavior of the marine organism known as the salp. The salp is a marine creature with a translucent, barrel-shaped body that resembles a jellyfish. SSA simulates the chain behavior of this creature. In SSA, the problem to be solved is modeled as finding the best food source, and the fitness function of the algorithm depends on the quality of the food source.

Depending on their position in the chain structure, all individuals are divided into two categories: leaders and followers. The leaders guide all individuals to form a salp chain and move towards the food source to find a better food source for the population, while the followers follow the individual in front of them and are indirectly guided by the leaders. The mathematical model of these two roles is as follows.

In the D-dimensional search space, N salp individuals are randomly generated. The salp population is represented by the N × D matrix in Equation (1)

$$X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_N \end{bmatrix} = \begin{bmatrix} x_1^1 & x_2^1 & \cdots & x_D^1 \\ x_1^2 & x_2^2 & \cdots & x_D^2 \\ \vdots & \vdots & \ddots & \vdots \\ x_1^N & x_2^N & \cdots & x_D^N \end{bmatrix}. \quad (1)$$

Here, $X$ denotes the salp population, $X_i$ denotes the $i$-th individual in the population, and $x_j^i$ denotes the $j$-th attribute of the $i$-th individual.

The position update of the leaders is related to the current globally best food source location, so after generating the population, the current best food source location $F$ is determined. The leaders guide the population, and their positions are updated according to Equation (2)

$$x_j^i = \begin{cases} F_j + c_1 \left( r_1 (ub_j - lb_j) + lb_j \right), & r_2 \ge \frac{1}{2} \\ F_j - c_1 \left( r_1 (ub_j - lb_j) + lb_j \right), & r_2 < \frac{1}{2} \end{cases} \quad (2)$$

where $x_j^i$ denotes the new position of the $i$-th leader in the $j$-th dimension, $F_j$ is the $j$-th dimension of the best food source, $ub_j$ and $lb_j$ denote the upper and lower bounds of the $j$-th dimension of the search space, respectively, and $r_1$ and $r_2$ are random numbers drawn from a uniform distribution on $[0, 1]$. The coefficient $c_1$ is an important convergence factor of SSA.

$c_1$ is mainly responsible for balancing the exploration and exploitation of the algorithm, and it is calculated by Equation (3)

$$c_1 = 2 \exp \left( - \left( \frac{4t}{T_{\max}} \right)^2 \right), \quad (3)$$

where $t$ is the current iteration number and $T_{\max}$ is the maximum number of iterations. The value of $c_1$ gradually decreases from 2, and its variation curve is shown in Figure 1. In the early stage of the iteration, $c_1$ changes quickly and produces large steps, which is conducive to global search. In the later stage of the iteration, $c_1$ becomes stable and the steps are small, which is conducive to fine search and convergence to the globally best food source.

Followers update their positions according to Newton's laws of motion. Given an initial velocity, an acceleration and an initial position, the position update formula is Equation (4)

$$x_j^i = v_0 t + \frac{1}{2} a t^2, \quad (4)$$

Assuming the initial velocity $v_0 = 0$ and $a = (v_{\text{final}} - v_0)/t$, Equation (5) gives the position update of the followers.

Figure 1. Variation curve of c1 over 500 iterations.

$$x_j^i(t) = \frac{1}{2} \left( x_j^{i-1}(t-1) + x_j^i(t-1) \right), \quad (5)$$

where $i \ge 2$ and $x_j^i$ denotes the new position of the $i$-th follower in the $j$-th dimension. According to Equation (5), the position update of a follower is related to the position of the individual in front of it.

The process of SSA is as follows:

Step 1. Set parameters such as population size, maximum number of iterations, upper and lower boundaries.

Step 2. Evaluate the fitness value of each individual and determine the food source, the leaders and the followers.

Step 3. Check whether the maximum number of iterations has been reached. If so, output the optimal position and optimal value; otherwise, proceed to the next step.

Step 4. Update the leader positions using Equation (2) and the follower positions using Equation (5), then clamp both to the search boundaries.

Step 5. Calculate the fitness value of the individual after the location update.

Step 6. Increase the iteration counter by 1 and return to Step 3.
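To make the procedure concrete, the following is a minimal Python sketch of the baseline SSA loop described above (Equations (2), (3) and (5)), assuming a minimization problem. The function and variable names are illustrative and not taken from the original implementation.

```python
import numpy as np

def ssa(obj, lb, ub, dim, n_pop=30, t_max=500, seed=None):
    rng = np.random.default_rng(seed)
    lb, ub = np.full(dim, lb, dtype=float), np.full(dim, ub, dtype=float)

    # Steps 1-2: random initialization (Eq. (1)) and food source selection
    X = rng.uniform(lb, ub, size=(n_pop, dim))
    fit = np.apply_along_axis(obj, 1, X)
    F, F_fit = X[np.argmin(fit)].copy(), fit.min()

    for t in range(1, t_max + 1):
        c1 = 2.0 * np.exp(-(4.0 * t / t_max) ** 2)        # Eq. (3)
        X_prev = X.copy()

        # Leaders (first half of the chain) move around the food source, Eq. (2)
        for i in range(n_pop // 2):
            r1, r2 = rng.random(dim), rng.random(dim)
            step = c1 * (r1 * (ub - lb) + lb)
            X[i] = np.where(r2 >= 0.5, F + step, F - step)

        # Followers average their own and their predecessor's previous positions, Eq. (5)
        for i in range(n_pop // 2, n_pop):
            X[i] = 0.5 * (X_prev[i - 1] + X_prev[i])

        X = np.clip(X, lb, ub)                             # amend the boundaries
        fit = np.apply_along_axis(obj, 1, X)
        if fit.min() < F_fit:                              # update the food source
            F, F_fit = X[np.argmin(fit)].copy(), fit.min()
    return F, F_fit

# Example: the 30-dimensional sphere function
if __name__ == "__main__":
    best_x, best_f = ssa(lambda x: float(np.sum(x ** 2)), -100, 100, dim=30)
    print(best_f)
```

Here the first half of the population is treated as leaders, following the leader/follower split used later in the experiments of this paper.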

3. Multi-Strategy-Driven Salp Swarm Algorithm (MSD-SSA)

3.1. Random Step of Levy Flight and Brownian Motion

In the standard SSA, the leader searches only around the location of the food source, and the follower's update depends only on the adjacent individual in front of it. Levy flight [21], proposed by the French mathematician Paul Pierre Lévy, is a random walk whose step lengths follow a heavy-tailed distribution with a high probability of producing long strides. Most animals follow this rule in their foraging behavior. The occasional long-distance steps of Levy flight help to expand the search range and escape local optima, while the more frequent short steps allow a careful search of the surrounding area and improve the local exploitation ability. A trajectory of Levy flight is shown in Figure 2.

The probability density function of Levy flight obeys the Levy distribution, which can be expressed by Equation (6)

$$\mathrm{Levy}(s) = \frac{1}{\pi} \int_0^{\infty} \exp \left( -\beta |k|^{\lambda} \right) \cos(ks) \, dk. \quad (6)$$

According to the Mantegna method [22], the random step of Levy flight can be expressed as follows

$$l = \frac{u}{|v|^{1/\beta}}, \quad (7)$$

where $u$ and $v$ are Gaussian random variables satisfying the following conditions

$$u \sim N(0, \delta_u^2), \quad v \sim N(0, \delta_v^2), \quad (8)$$

$$\delta_u = \left( \frac{\Gamma(1+\beta) \sin\left(\frac{\pi \beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \beta \, 2^{\frac{\beta-1}{2}}} \right)^{1/\beta}, \quad (9)$$

$$\delta_v = 1, \quad (10)$$

where the value of $\beta$ is generally taken as 1.5 and $\Gamma(\cdot)$ is the Gamma function.

Figure 2. Movement trajectory of Levy flight (a random walk generated by Levy flight random steps with random directions within a bounded range).

Brownian motion is the irregular movement of a suspended particle, a random walk without a fixed pattern. Since the step length of standard Brownian motion follows a Gaussian distribution with mean 0 and variance 1, the Brownian motion random number is generated from Equation (11)

$$\mathrm{Brown} \sim N(0, \delta^2). \quad (11)$$
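As a concrete illustration, the following is a minimal Python sketch of the two random-step generators, assuming $\beta = 1.5$ for the Levy step (Equations (7)-(10)) and unit variance for the Brownian step (Equation (11)); the function names are illustrative.

```python
import numpy as np
from scipy.special import gamma

def levy_step(size, beta=1.5, rng=None):
    """Levy flight random step via the Mantegna method, Eqs. (7)-(10)."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)  # Eq. (9)
    u = rng.normal(0.0, sigma_u, size)     # Eq. (8), with sigma_v = 1 from Eq. (10)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)     # Eq. (7)

def brownian_step(size, sigma=1.0, rng=None):
    """Brownian motion random step, Eq. (11)."""
    rng = rng or np.random.default_rng()
    return rng.normal(0.0, sigma, size)
```

Both generators return one random step per dimension, which is how they are applied to the leader positions in Section 3.4.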

3.2. Crossover Operator

The crossover operator helps to increase the diversity of the population, thereby increasing the chance for the population to break out of locally optimal food sources.

Different random integers are generated, each corresponding to a dimension of an individual. The individuals are sorted by fitness value, adjacent individuals exchange the positions of the selected dimensions in sequence to generate crossover individuals, and a greedy strategy retains the individuals with better fitness values for the next iteration. The $c$ random integers satisfy

$$c = kD, \quad 0 < s_i \le D, \quad s = \{ s_1, s_2, \ldots, s_c \}, \quad (12)$$

where the random integers in $s$ are distinct from one another and $k$ is the proportion coefficient of the crossover operator.
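The following is a hedged Python sketch of this crossover, assuming a minimization problem: adjacent individuals (after sorting by fitness) swap $c = kD$ randomly chosen dimensions, and a greedy comparison keeps an offspring only if it beats the worse of the two parents. The function name and the exact pairing scheme are illustrative.

```python
import numpy as np

def crossover_leaders(leaders, fitness, obj, k=0.2, rng=None):
    rng = rng or np.random.default_rng()
    n, dim = leaders.shape
    order = np.argsort(fitness)                      # sort leaders by fitness value
    leaders, fitness = leaders[order].copy(), fitness[order].copy()
    c = max(1, int(k * dim))                         # Eq. (12): number of crossed dimensions
    for i in range(0, n - 1, 2):
        s = rng.choice(dim, size=c, replace=False)   # distinct random dimensions
        child_a, child_b = leaders[i].copy(), leaders[i + 1].copy()
        child_a[s], child_b[s] = leaders[i + 1][s], leaders[i][s]
        for child in (child_a, child_b):             # greedy retention
            f_child = obj(child)
            worst = i if fitness[i] > fitness[i + 1] else i + 1
            if f_child < fitness[worst]:
                leaders[worst], fitness[worst] = child, f_child
    return leaders, fitness
```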

3.3. Mean Position of the Leader

In the standard SSA, the position update of a follower depends only on the individual in front of it, which is a blind following behavior and slows down the convergence of the overall algorithm. If the preceding individual falls into a local optimum, all subsequent followers are unable to jump out of it.

Instead, consider the positional relationship between the followers and the average of the leaders, and let the followers closely follow the mean leader position, computed as follows

$$\bar{x}_j = \frac{\sum_{i=1}^{N_l} x_j^i}{N_l}, \quad (13)$$

where $\bar{x}_j$ is the average position of all leaders in the $j$-th dimension and $N_l$ is the number of leaders.

3.4. Improved Salp Swarm Algorithm (MSD-SSA)

In the standard SSA, the leader searches only around the location of the food source, and the follower's update depends only on the adjacent individual's location. Whether the search is global or local is determined by the value of $c_1$. If the food source is a local optimum, the leaders will be drawn into it and the followers will not be able to escape, so SSA as a whole cannot jump out of the local optimum. Therefore, in the early stage of the iteration, a random leader or the food source is linked with the current individual to update the leader, and a Levy flight [21] random step is added with small probability to help jump out of local optima during the iteration process.

Linking a random leader or the food source with the current individual's location yields the leader position update formula

$$x_j^i(t) = \begin{cases} x_j^i(t-1) + c_1 \left( F_j - x_j^i(t-1) \right), & r_2 \ge \frac{1}{2} \\ x_j^i(t-1) + c_1 \left( x_j^i(t-1) - x_j^{rand} \right), & r_2 < \frac{1}{2} \end{cases} \quad (14)$$

where $x_j^{rand}$ is the position of a randomly selected leader in the $j$-th dimension. In this way, both the influence of the food source on the current individual and the influence of other individuals in the population are taken into account, as well as the exploration and exploitation process of the algorithm.

Adding a Levy flight random number with small probability helps the algorithm jump out of local optima. When $P < P_0$, the Levy flight [21] random number is calculated through Equations (7) to (10), and the leader position update formula with the Levy flight random number becomes

$$x_j^i(t) = \begin{cases} \left( x_j^i(t-1) + c_1 \left( F_j - x_j^i(t-1) \right) \right) \cdot l, & r_2 \ge \frac{1}{2} \\ \left( x_j^i(t-1) + c_1 \left( x_j^i(t-1) - x_j^{rand} \right) \right) \cdot l, & r_2 < \frac{1}{2} \end{cases} \quad (15)$$

where $P_0$ is the probability of applying the Levy flight and $l$ is the Levy flight [21] random number.
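A minimal Python sketch of this early-phase leader update (Equations (14)-(15)) is given below, assuming minimization and using the `levy_step` generator sketched in Section 3.1; the function name and argument names are illustrative.

```python
import numpy as np

def leader_update_early(leaders, F, c1, p0=0.2, rng=None):
    rng = rng or np.random.default_rng()
    n, dim = leaders.shape
    new = leaders.copy()
    for i in range(n):
        rand_idx = rng.integers(n)                    # a randomly selected leader
        r2 = rng.random(dim)
        move = np.where(r2 >= 0.5,
                        c1 * (F - leaders[i]),            # toward the food source, Eq. (14)
                        c1 * (leaders[i] - leaders[rand_idx]))
        pos = leaders[i] + move
        if rng.random() < p0:                         # small-probability Levy flight, Eq. (15)
            pos = pos * levy_step(dim, rng=rng)
        new[i] = pos
    return new
```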

Later in the iteration, as the value of $c_1$ continues to decrease, SSA tends toward fine-tuned search. To improve the search accuracy, the random step of Brownian motion from Equation (11) is used. Equation (16) is the leader position update adopting Brownian motion

$$x_j^i = \begin{cases} \left( F_j + c_1 \left( r_1 (ub_j - lb_j) + lb_j \right) \right) \cdot \mathrm{Brown}, & r_2 \ge \frac{1}{2} \\ \left( F_j - c_1 \left( r_1 (ub_j - lb_j) + lb_j \right) \right) \cdot \mathrm{Brown}, & r_2 < \frac{1}{2} \end{cases} \quad (16)$$

where Brown is the random step of Brownian motion.
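The corresponding late-phase update (Equation (16)) can be sketched as follows, scaling the standard SSA move around the food source by the `brownian_step` generator from Section 3.1; again, the names are illustrative.

```python
import numpy as np

def leader_update_late(leaders, F, c1, lb, ub, rng=None):
    rng = rng or np.random.default_rng()
    n, dim = leaders.shape
    new = np.empty_like(leaders)
    for i in range(n):
        r1, r2 = rng.random(dim), rng.random(dim)
        base = np.where(r2 >= 0.5,
                        F + c1 * (r1 * (ub - lb) + lb),
                        F - c1 * (r1 * (ub - lb) + lb))
        new[i] = base * brownian_step(dim, rng=rng)   # Eq. (16)
    return new
```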

The crossover operator can generate new individuals, expand the diversity of the population, improve search efficiency, and help the population jump out of local optima. Random dimensions are generated according to Equation (12), and the crossover operator is applied to the leaders.

According to Equation (13), the position updating method for the follower is derived.

$$x_j^i(t) = \frac{1}{2} \left( \bar{x}_j + x_j^i(t-1) \right), \quad (17)$$

where $\bar{x}_j$ is the mean position of the leaders calculated by Equation (13).

Followers are led purposefully, and the population is able to move faster towards the optimal food source, allowing the algorithm to converge more quickly.
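The mean-leader follower update (Equations (13) and (17)) is only a few lines of code; a minimal sketch with illustrative names is:

```python
import numpy as np

def follower_update(followers, leaders):
    x_bar = leaders.mean(axis=0)          # Eq. (13): mean leader position per dimension
    return 0.5 * (x_bar + followers)      # Eq. (17), broadcast over all followers
```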

At each iteration, a greedy strategy is used to retain the individuals with better fitness values.
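Putting the pieces together, the following is a hedged Python sketch of one possible MSD-SSA main loop, built from the helper functions sketched above (`leader_update_early`, `leader_update_late`, `crossover_leaders`, `follower_update`) and the $c_1$ schedule of Equation (3). It approximates the procedure described in this section rather than reproducing the authors' code; in particular, switching the leader update halfway through the iterations is an assumption made here for illustration.

```python
import numpy as np

def msd_ssa(obj, lb, ub, dim, n_pop=30, t_max=500, p0=0.2, k=0.2, seed=None):
    rng = np.random.default_rng(seed)
    lb, ub = np.full(dim, lb, dtype=float), np.full(dim, ub, dtype=float)
    X = rng.uniform(lb, ub, size=(n_pop, dim))
    fit = np.apply_along_axis(obj, 1, X)
    F, F_fit = X[np.argmin(fit)].copy(), fit.min()
    n_lead = n_pop // 2

    for t in range(1, t_max + 1):
        c1 = 2.0 * np.exp(-(4.0 * t / t_max) ** 2)                     # Eq. (3)
        leaders, followers = X[:n_lead], X[n_lead:]

        # Phase switch (assumed here: first half vs. second half of the iterations)
        if t <= t_max // 2:
            leaders = leader_update_early(leaders, F, c1, p0, rng)     # Eqs. (14)-(15)
        else:
            leaders = leader_update_late(leaders, F, c1, lb, ub, rng)  # Eq. (16)
        leaders = np.clip(leaders, lb, ub)

        # Crossover with greedy retention on the leaders (Eq. (12))
        lead_fit = np.apply_along_axis(obj, 1, leaders)
        leaders, _ = crossover_leaders(leaders, lead_fit, obj, k, rng)

        followers = follower_update(followers, leaders)                # Eqs. (13), (17)
        X_new = np.clip(np.vstack([leaders, followers]), lb, ub)

        # Greedy selection: keep the better of old and new individuals
        fit_new = np.apply_along_axis(obj, 1, X_new)
        improved = fit_new < fit
        X[improved], fit[improved] = X_new[improved], fit_new[improved]
        if fit.min() < F_fit:
            F, F_fit = X[np.argmin(fit)].copy(), fit.min()
    return F, F_fit
```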

The flowchart of MSD-SSA is shown in Figure 3.

Figure 3. The flowchart of MSD-SSA.

The pseudo code of the MSD-SSA is described as follows:

3.5. Time Complexity Analysis

Time complexity is an important indicator of the amount of work required to run the algorithm and to evaluate the time consumption of the algorithm. The time complexity is usually denoted by O. The MSD-SSA algorithm consists of three parts: population initialization, updating the leader position and updating the follower position. Assume that the iteration number of the algorithm is T, the population size is N, and the dimension is D.

1) Initializing the population needs to be run $ND$ times.

2) Calculating the individual fitness values and selecting the best individual as the food source needs to be run $\frac{N(N-1)}{2}$ times.

3) Updating the algorithm parameters needs to be run 4 times.

4) Updating the leaders in the search space needs to be run $D$ times.

5) Updating the followers in the search space needs to be run $(N-1)D$ times.

6) The Levy flight update needs to be run $\frac{1}{2}ND$ times.

7) The Brownian motion update needs to be run $\frac{1}{2}ND$ times.

8) Finding the food source in the population and exporting it needs to be run $ND$ times.

Each of the above operations goes through $T$ iterations, so the total time complexity of MSD-SSA is given by Equation (18)

$$O(\text{MSD-SSA}) = T \left[ ND + \frac{N(N-1)}{2} + 4 + D + (N-1)D \right]. \quad (18)$$

4. Simulation Experiments and Result Discussion

To verify the performance of MSD-SSA, 10 commonly used benchmark test functions [23], several CEC2017 constrained optimization problems [20], and three real engineering problems were selected for experiments, mainly to examine the optimization capability and convergence speed of MSD-SSA.

The experiments were run on the Windows 10 operating system with an Intel Core i5-1135G7 @ 2.4 GHz CPU and 16 GB of RAM. The implementation language is Python, and the development environment is PyCharm 2022.3.2.

4.1. Benchmarking Functions

The 10 benchmark test functions [23] are shown in Table 1. All benchmark test functions are 30-dimensional, the population contains 30 individuals, and each algorithm is iterated 500 times. The comparison is made among the Wolf Pack Algorithm (WPA), SSA and MSD-SSA. The parameter settings for WPA are $\alpha = 4$, $\beta = 6$, $\omega = 30$. The numbers of leaders and followers are each half of the population, and the parameter values in MSD-SSA are $P_0 = 0.2$, $k = 0.2$. The test functions in Table 1 were solved using WPA, SSA and MSD-SSA respectively, with 30 independent runs performed in Python, and the mean, standard deviation and convergence speed (time) of the three algorithms were compared to avoid bias caused by the randomness of the algorithms and to ensure the fairness of the comparison. The experimental results are shown in Table 2 and the resulting convergence curves are plotted in Figure 4.

4.2. CEC2017 Constrained Optimization Problems

The CEC2017 test functions [20] are shown in Table 3. The test functions are considered in 10 and 30 dimensions, respectively. The population contains 100 individuals, the numbers of leaders and followers are each half of the population, and each algorithm is iterated 1000 times. The parameter values in MSD-SSA are $P_0 = 0.2$, $k = 0.2$. The test functions in Table 3 were solved using MSD-SSA and SSA respectively, with 30 independent runs performed in Python, and the mean, standard deviation and convergence speed (time) of the two algorithms were compared to avoid bias caused by the randomness of the algorithms.

Table 1. Description of benchmark function.

Table 2. Performance comparison of WPA, SSA and MSD-SSA on test functions.

Table 4 and Table 5 show the experimental results in 10 and 30 dimensions, respectively, and the corresponding convergence curves are shown in Figure 5 and Figure 6.

4.3. Result Discussion

For the benchmark test functions, the experimental results in Table 2 show that MSD-SSA has significant advantages over SSA on most functions. In terms of convergence accuracy, MSD-SSA is significantly better than SSA on $f_1$, $f_2$, $f_3$, $f_4$, $f_6$, $f_7$, $f_8$ and $f_9$. For $f_5$ and $f_{10}$, the final convergence is not optimal, but the convergence results are greatly improved compared with SSA. In terms of robustness, the stability is better than SSA on all functions except $f_8$. Although the stability on $f_8$ is slightly weaker than that of SSA, its standard deviation still reaches the order of $10^{-7}$.

Figure 4. Convergence curves of the test functions f1-f10.


Table 3. CEC2017 description of benchmark function.

Table 4. Performance comparison of MSD-SSA and SSA on test functions (10D).

Table 5. Performance comparison of MSD-SSA and SSA on test functions (30D).

In terms of convergence speed, MSD-SSA converges to the optimal point faster than SSA.

For the CEC2017 test functions, which include unimodal and multimodal problems [20] and can effectively evaluate the performance of algorithms, Table 4 reports results on 16 functions in 10 dimensions. The experimental results show that, under the same environment, MSD-SSA achieves better convergence accuracy and faster convergence speed than SSA.

Figure 5. CEC2017 test function iteration curve (10D).

Figure 6. CEC2017 test function iteration curve (30D).

For most CEC2017 test functions, MSD-SSA is also more stable. However, Table 4 shows that the standard deviations on CEC10 and CEC12 are too large, indicating that the robustness of the improved algorithm is poor on these functions.

Table 5 reports results on 16 CEC2017 test functions [20] in 30 dimensions. Compared with the 10-dimensional CEC2017 test functions, the 30-dimensional test functions are more complex. According to Table 5, MSD-SSA is superior to SSA in terms of convergence accuracy, speed and robustness. However, the standard deviations on CEC10, CEC11, CEC12, CEC15 and CEC16 are too large, indicating that the robustness of MSD-SSA on these functions is poor and needs to be improved.

In summary, the improved algorithm has a stronger search capability and faster convergence to the target value for most functions.

5. Engineering Problems and Result Discussion

5.1. Cantilever Beam Design Problem

In the era of big data, solving constrained optimization problems is crucial in engineering. Although benchmark function tests were addressed in the previous section, problems in actual projects are subject to specific constraints. In this section, SSA and MSD-SSA are used to optimize practical engineering problems, verifying the feasibility of MSD-SSA for such problems.

The goal of the cantilever beam design (CBD) problem [24] is to minimize the weight of a cantilever beam with hollow square cross-sections, see Figure 7. The variables of this problem are the dimensions of five hollow square members. The CBD problem [24] is expressed as follows:

$$\begin{aligned} \min \ & f(x) = 0.0624 (x_1 + x_2 + x_3 + x_4 + x_5) \\ \text{s.t.} \ & g(x) = \frac{61}{x_1^3} + \frac{27}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} - 1 \le 0 \\ & 0.01 \le x_1, x_2, x_3, x_4, x_5 \le 100 \end{aligned} \quad (19)$$
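A hedged sketch of how the CBD problem of Equation (19) can be passed to the optimizers sketched above is given below, using a simple static penalty for the constraint; the penalty coefficient is a hypothetical choice and is not taken from the paper.

```python
import numpy as np

def cbd_fitness(x, penalty=1e6):
    f = 0.0624 * np.sum(x)                                         # objective of Eq. (19)
    g = 61/x[0]**3 + 27/x[1]**3 + 19/x[2]**3 + 7/x[3]**3 + 1/x[4]**3 - 1
    return f + penalty * max(0.0, g) ** 2                          # penalize violation of g(x) <= 0

# e.g. best_x, best_f = msd_ssa(cbd_fitness, 0.01, 100, dim=5)
```

The TCSD and SRD problems in the following subsections can be handled in the same way by penalizing each of their constraints.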

Thirty independent runs were performed to verify the effectiveness of the algorithm; the experimental results are shown in Table 6.

Figure 7. Cantilever beam structure [25] .

Table 6. Comparison results of MSD-SSA with SSA for CBD problem [24] .

5.2. Tension/Compression Spring Design Problem

The purpose of the tension/compression spring design (TCSD) problem [26] is to minimize the weight of the spring. This problem has three variables, namely the wire diameter, the mean coil diameter and the number of active coils. Equation (20) describes the TCSD problem [26].

Thirty independent runs were performed to verify the effectiveness of the algorithm; the experimental results are shown in Table 7.

$$\begin{aligned} \min \ & f(x) = (x_3 + 2) x_2 x_1^2 \\ \text{s.t.} \ & g_1(x) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0 \\ & g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0 \\ & g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0 \\ & g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0 \\ & 0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15 \end{aligned} \quad (20)$$

5.3. Speed Reducer Design Problem

The goal of the speed reducer design (SRD) problem is to minimize the weight of a speed reducer so that the engine and propeller can rotate efficiently. This problem involves constraints on the stresses in the shafts, the transverse deflection of the shafts, and the surface and bending stresses of the gear teeth (see Figure 8). Equation (21) describes the SRD problem [27].

Thirty independent runs were performed to verify the effectiveness of the algorithm; the experimental results are shown in Table 8.

Table 7. Comparison results of MSD-SSA with SSA for TCSD problem [26] .

Figure 8. Schematic views of speed reducer design [27] .

$$\begin{aligned} \min \ & f(x) = 0.7854 x_1 x_2^2 \left( 3.3333 x_3^2 + 14.9334 x_3 - 43.0934 \right) - 1.508 x_1 \left( x_6^2 + x_7^2 \right) \\ & \quad + 7.4777 \left( x_6^3 + x_7^3 \right) + 0.7854 \left( x_4 x_6^2 + x_5 x_7^2 \right) \\ \text{s.t.} \ & g_1(x) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0, \quad g_2(x) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0, \\ & g_3(x) = \frac{1.93 x_4^3}{x_2 x_6^4 x_3} - 1 \le 0, \quad g_4(x) = \frac{1.93 x_5^3}{x_2 x_7^4 x_3} - 1 \le 0, \\ & g_5(x) = \frac{\left[ \left( 745 x_4 / (x_2 x_3) \right)^2 + 16.9 \times 10^6 \right]^{1/2}}{110 x_6^3} - 1 \le 0, \\ & g_6(x) = \frac{\left[ \left( 745 x_5 / (x_2 x_3) \right)^2 + 157.5 \times 10^6 \right]^{1/2}}{85 x_7^3} - 1 \le 0, \\ & g_7(x) = \frac{x_2 x_3}{40} - 1 \le 0, \quad g_8(x) = \frac{5 x_2}{x_1} - 1 \le 0, \quad g_9(x) = \frac{x_1}{12 x_2} - 1 \le 0, \\ & g_{10}(x) = \frac{1.5 x_6 + 1.9}{x_4} - 1 \le 0, \quad g_{11}(x) = \frac{1.1 x_7 + 1.9}{x_5} - 1 \le 0, \\ & 2.6 \le x_1 \le 3.6, \quad 0.7 \le x_2 \le 0.8, \quad 17 \le x_3 \le 28, \quad 7.3 \le x_4 \le 8.3, \\ & 7.3 \le x_5 \le 8.3, \quad 2.9 \le x_6 \le 3.9, \quad 5.0 \le x_7 \le 5.5 \end{aligned} \quad (21)$$

5.4. Result Discussion

For the CBD problem [24], the optimization results of MSD-SSA and SSA are 1.313731 and 1.339961, respectively; the optimal solutions are shown in Table 6.

For the TCSD problem [26], the optimization results of MSD-SSA and SSA are 0.0126917 and 0.0128271, respectively; the optimal solutions are shown in Table 7.

For the SRD problem [27], the optimization results of MSD-SSA and SSA are 2997.0889 and 3041.6166, respectively; the optimal solutions are shown in Table 8.

The average running times (in seconds) for the above practical engineering problems are as follows:

Table 8. Comparison results of MSD-SSA with SSA for SRD problem.

Table 9. Time spent by MSD-SSA and SSA to solve practical problems.

It can be seen from Table 9 that MSD-SSA takes less time than SSA to solve the same practical engineering problems, which shows that the improved algorithm is superior to the original algorithm in terms of convergence speed.

6. Conclusions and Implications

To address the drawbacks of the Salp Swarm Algorithm, namely low convergence accuracy, slow convergence speed and a tendency to fall into local optima, this paper proposes a Multi-Strategy-Driven Salp Swarm Algorithm. Firstly, a Levy flight strategy applied with small probability and a crossover operation are introduced for the leaders to improve the global exploration ability in the early stage of the algorithm, so that the leaders can effectively escape the trap of local optima. Secondly, Brownian motion is adopted in the later iterations to improve the exploitation ability of the algorithm near the global optimum and to improve its convergence accuracy. Finally, the positional relationship with the leaders is used to update the followers, so that the followers purposefully follow the better individuals, which improves the convergence speed of the algorithm. These improvements strengthen both the global and local search of the algorithm and balance its exploration and exploitation. In the simulation experiments, benchmark test functions and three real engineering problems were used for comparison tests to examine the performance of the algorithm. The experimental results show that the improved algorithm has better global convergence accuracy and faster convergence speed than the original algorithm, and also outperforms the original algorithm in terms of robustness. Future work will continue to improve the optimization performance of the salp swarm algorithm, increasing its accuracy, convergence speed and stability, and will apply it to more practical problems.

SSA can be applied in many different fields, such as optimization design and machine learning. Specifically, MSD-SSA can be applied to problems that require finding an optimal solution, such as optimizing design variables to achieve the best performance in engineering, or adjusting model parameters to obtain the best prediction results in machine learning. The optimization capability of the algorithm improves convergence speed and accuracy, and it has certain advantages for solving complex, high-dimensional problems. In general, the application scenarios of MSD-SSA are relatively broad, and it can assist in solving problems in multiple fields.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Holland, J.H. (1992) Genetic Algorithms. Scientific American, 267, 66-73.
https://doi.org/10.1038/scientificamerican0792-66
[2] Das, S. and Suganthan, P.N. (2010) Differential Evolution: A Survey of the State-of-the-Art. IEEE Transactions on Evolutionary Computation, 15, 4-31.
https://doi.org/10.1109/TEVC.2010.2059031
[3] Kennedy, J. and Eberhart, R. (1995) Particle Swarm Optimization. Proceedings of ICNN’95—International Conference on Neural Networks, Perth, 27 November-December 1995, 1942-1948.
https://doi.org/10.1109/ICNN.1995.488968
[4] Dorigo, M., Birattari, M. and Stutzle, T. (2006) Ant Colony Optimization. IEEE Computational Intelligence Magazine, 1, 28-39.
https://doi.org/10.1109/MCI.2006.329691
[5] Karaboga, D. (2010) Artificial Bee Colony Algorithm. Scholarpedia, 5, Article No. 6915.
https://doi.org/10.4249/scholarpedia.6915
[6] Mareli, M. and Twala, B. (2018) An Adaptive Cuckoo Search Algorithm for Optimization. Applied Computing and Informatics, 14, 107-115.
https://doi.org/10.1016/j.aci.2017.09.001
[7] Połap, D. and Woźniak, M. (2017) Polar Bear Optimization Algorithm: Meta-Heuristic with Fast Population Movement and dynamic Birth and Death Mechanism. Symmetry, 9, Article 203.
https://doi.org/10.3390/sym9100203
[8] Mirjalili, S., Mirjalili, S.M. and Lewis, A. (2014) Grey Wolf Optimizer. Advances in Engineering Software, 69, 46-61.
https://doi.org/10.1016/j.advengsoft.2013.12.007
[9] Mirjalili, S. and Lewis, A. (2016) The Whale Optimization Algorithm. Advances in Engineering Software, 95, 51-67.
https://doi.org/10.1016/j.advengsoft.2016.01.008
[10] Mirjalili, S., et al. (2017) Salp Swarm Algorithm: A Bio-Inspired Optimizer for Engineering Design Problems. Advances in Engineering Software, 114, 163-191.
https://doi.org/10.1016/j.advengsoft.2017.07.002
[11] Thawkar, S. (2021) A Hybrid Model Using Teaching-Learning-Based Optimization and Salp Swarm Algorithm for Feature Selection and Classification in Digital Mammography. Journal of Ambient Intelligence and Humanized Computing, 12, 8793-8808.
https://doi.org/10.1007/s12652-020-02662-z
[12] Neggaz, N., Ewees, A.A., Abd Elaziz, M. and Mafarja, M. (2020) Boosting Salp Swarm Algorithm by Sine Cosine Algorithm and Disrupt Operator for Feature Selection. Expert Systems with Applications, 145, Article ID: 113103.
https://doi.org/10.1016/j.eswa.2019.113103
[13] Zhang, H., et al. (2022) A Multi-Strategy Enhanced Salp Swarm Algorithm for Global Optimization. Engineering with Computers, 38, 1177-1203.
https://doi.org/10.1007/s00366-020-01099-4
[14] Zhang, H., et al. (2023) Differential Evolution-Assisted Salp Swarm Algorithm with Chaotic Structure for Real-World Problems. Engineering with Computers, 39, 1735-1769.
https://doi.org/10.1007/s00366-021-01545-x
[15] Chen, L.X. and Mu, Y.M. (2021) Improved Salp Swarm Algorithm. Application Research of Computers, 38, 1648-1652. (In Chinese)
[16] Yang, G.Y., Wu, D.F., Liu, F.K. and Xu, T.Q. (2023) Improved Salp Swarm Algorithm with Multi-Strategy. Application Research of Computers, 40, 704-709. (In Chinese)
[17] Liu, J.S., Yuan, M.M. and Li, Y. (2022) Robot Path Planning Based on Improved Salp Swarm Algorithm. Journal of Computer Research and Development, 59, 1297-1314. (In Chinese)
[18] Zhang, Y. and Qin, L.X. (2020) Improved Salp Swarm Algorithm Based on Levy Flight Strategy. Computer Science, 47, 154-160. (In Chinese)
[19] Xie, C. and Zheng, H.Q. (2022) A Novel Salp Swarm Algorithm and Application. Computer Engineering & Science, 44, 84-190.
https://doi.org/10.54097/hset.v24i.3896
[20] Wu, G.H., Mallipeddi, R. and Suganthan, P.N. (2017) Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization. Technical Report.
https://www.researchgate.net/publication/317228117
[21] Zhang, J. and Wang, J.-S. (2020) Improved Salp Swarm Algorithm Based on Levy Flight and Sine Cosine Operator. IEEE Access, 8, 99740-99771.
https://doi.org/10.1109/ACCESS.2020.2997783
[22] Mantegna, R.N. (1994) Fast, Accurate Algorithm for Numerical Simulation of Levy Stable Stochastic Processes. Physical Review E, 49, 4677-4683.
https://doi.org/10.1103/PhysRevE.49.4677
[23] Yao, X., Liu, Y. and Lin, G. (1999) Evolutionary Programming Made Faster. IEEE Transactions on Evolutionary Computation, 3, 82-102.
https://doi.org/10.1109/4235.771163
[24] Yildiz, A.R. (2019) A Novel Hybrid Whale-Nelder-Mead Algorithm for Optimization of Design and Manufacturing Problems. The International Journal of Advanced Manufacturing Technology, 105, 5091-5104.
https://doi.org/10.1007/s00170-019-04532-1
[25] Saremi, S., Mirjalili, S. and Lewis, A. (2017) Grasshopper Optimization Algorithm: Theory and Application. Advances in Engineering Software, 105, 30-47.
https://doi.org/10.1016/j.advengsoft.2017.01.004
[26] Zhao, S., et al. (2022) Elite Dominance Scheme Ingrained adaptive Salp Swarm Algorithm: A Comprehensive Study. Engineering with Computers, 38, 4501-4528.
https://doi.org/10.1007/s00366-021-01464-x
[27] Askari, Q., Saeed, M. and Younas, I. (2020) Heap-Based Optimizer Inspired by Corporate Rank Hierarchy for Global Optimization. Expert Systems with Applications, 161, Article ID: 113702.
https://doi.org/10.1016/j.eswa.2020.113702
