Optimizing Grey Wolf Optimization: A Novel Agents’ Positions Updating Technique for Enhanced Efficiency and Performance

Mahmoud Khatab^{1}, Mohamed El-Gamel^{1}, Ahmed I. Saleh^{2}, Asmaa H. Rabie^{2}, Atallah El-Shenawy^{1,3}

^{1}Department of Mathematics and Engineering Physics, Faculty of Engineering, Mansoura University, Mansoura, Egypt.

^{2}Department of Computers and Control, Faculty of Engineering, Mansoura University, Mansoura, Egypt.

^{3}Department of Mathematics, Faculty of Science, New Mansoura University, Mansoura, Egypt.

**DOI: **10.4236/ojop.2024.131002

Grey Wolf Optimization (GWO) is a nature-inspired metaheuristic algorithm that has gained popularity for solving optimization problems. In GWO, the success of the algorithm heavily relies on the efficient updating of the agents’ positions relative to the leader wolves. In this paper, we provide a brief overview of the Grey Wolf Optimization technique and its significance in solving complex optimization problems. Building upon the foundation of GWO, we introduce a novel technique for updating agents’ positions, which aims to enhance the algorithm’s effectiveness and efficiency. To evaluate the performance of our proposed approach, we conduct comprehensive experiments and compare the results with the original Grey Wolf Optimization technique. Our comparative analysis demonstrates that the proposed technique achieves superior optimization outcomes. These findings underscore the potential of our approach in addressing optimization challenges effectively and efficiently, making it a valuable contribution to the field of optimization algorithms.

Keywords

Grey Wolf Optimization (GWO), Metaheuristic Algorithm, Optimization Problems, Agents’ Positions, Leader Wolves, Optimal Fitness Values, Optimization Challenges

Share and Cite:

Khatab, M., El-Gamel, M., Saleh, A., Rabie, A. and El-Shenawy, A. (2024) Optimizing Grey Wolf Optimization: A Novel Agents’ Positions Updating Technique for Enhanced Efficiency and Performance. *Open Journal of Optimization*, **13**, 21-30. doi: 10.4236/ojop.2024.131002.

1. Introduction

Nature-inspired meta-heuristic techniques have revolutionized the landscape of computational problem-solving by harnessing the inherent balance and efficiency found in the natural world. These innovative algorithms draw inspiration from the intricate workings of nature’s systems and have emerged as powerful tools across various domains [1]. Among these algorithms, the Grey Wolf Optimization (GWO) algorithm [2], introduced by Mirjalili *et al.* in 2014, has garnered substantial recognition and acclaim. Natural computing in the realm of meta-heuristic techniques has opened doors to novel ways of tackling complex problems [3]. These approaches mimic processes observed in the natural world, such as the behavior of animals, the growth of plants, or the dynamics of ecosystems, and they offer unique advantages because they can adapt and optimize much like their natural counterparts. Evolutionary algorithms, a subset of these meta-heuristic techniques, have been particularly effective in solving multi-objective problems [4]. These algorithms are inspired by the principles of natural selection and genetic variation, and they excel at exploring diverse solution spaces to find optimal or near-optimal solutions for problems with multiple conflicting objectives.

The Grey Wolf Optimizer (GWO) has emerged as a versatile and powerful metaheuristic algorithm renowned for its ability to tackle a wide array of optimization problems. Its adaptability and efficiency have earned it a prominent place in the toolbox of researchers and practitioners across diverse domains. In recent years, GWO has found applications in numerous practical scenarios, showcasing its effectiveness in addressing real-world challenges. For instance, it has been extensively applied in the realm of robotics [5], demonstrating its prowess in robot path planning, where it aids in finding optimal paths for robots navigating complex environments. Furthermore, GWO has played a pivotal role in improving scheduling algorithms for crowdsourcing applications in mobile edge computing [6], optimizing task allocation and resource management to enhance overall system performance. Its capabilities extend even further into scientific and engineering domains, such as in the parameter estimation of solid oxide fuel cell models, where the improved chaotic Grey Wolf Optimization Algorithm has proven to be a novel and efficient method [7]. Additionally, GWO has been instrumental in addressing planning problems within smart grids, optimizing resource allocation, and enhancing grid efficiency [8]. Notably, researchers have ventured into quantum-inspired variants of GWO, introducing innovative approaches to optimization problems and expanding the algorithm’s horizons [9]. These multifaceted applications underscore the versatility, adaptability, and potential of the Grey Wolf Optimizer, making it an invaluable tool for addressing a wide range of complex problems in today’s evolving technological landscape.

In this context, Grey Wolf Optimization (GWO) stands out as an algorithm that closely emulates the social hierarchy and hunting behavior of grey wolves. It manifests an inherent ability to adapt and optimize, making it a valuable tool in the world of optimization. At its core, GWO revolves around the emulation of the pack’s hunting dynamics, with a keen focus on the interplay between leader wolves and their prey.

The key principle underlying GWO is the precise and efficient updating of agents’ positions; this pivotal step dictates the algorithm’s capacity to attain optimal fitness values. As a result, the refinement of this crucial aspect has remained at the forefront of research efforts in the field of optimization. In this paper, we examine Grey Wolf Optimization in depth, exploring its theoretical underpinnings and practical implications for solving intricate optimization problems. The motivation behind this endeavor lies in the continuous pursuit of improving optimization algorithms’ performance and applicability in addressing real-world challenges. In the subsequent sections, we provide an extensive overview of the Grey Wolf Optimization technique, elucidating its conceptual framework and mechanisms. While acknowledging its remarkable capabilities, we also recognize the scope for further enhancement. Building upon the foundational principles of GWO, we introduce a novel technique aimed at refining the process of updating agents’ positions. The rationale for this innovation is rooted in the pursuit of maximizing the algorithm’s effectiveness and efficiency, attributes that are indispensable for tackling complex optimization problems. Our proposed technique is designed to fine-tune the dynamics of GWO, amplifying its problem-solving capabilities. To substantiate the effectiveness of our proposed approach, we present the results of a comprehensive series of experiments conducted under diverse optimization scenarios.

The crux of this paper lies in a rigorous comparative analysis between our novel technique and the conventional Grey Wolf Optimization method. This analysis is vital for empirically assessing the performance gains and computational efficiency offered by our approach. Preliminary findings indicate that our proposed technique surpasses the performance benchmarks set by traditional GWO. These outcomes underscore the immense potential of our approach in effectively addressing optimization challenges, offering promising avenues for its application across various domains.

As the quest for optimization solutions continues to intensify across industries and research disciplines, our contributions in this paper aim to further the understanding and efficacy of Grey Wolf Optimization. By optimizing the algorithm’s critical position-updating process for agents, we aim to facilitate more efficient and effective optimization, making a significant stride in the ever-evolving field of optimization algorithms.

2. Problem Definition and Suggested Solution

The primary issue with the Grey Wolf Optimization (GWO) algorithm lies in its approach to updating the positions of the agents. GWO traditionally places the new position at the average point of the three vectors calculated with respect to the leader wolves (alpha, beta, and delta). However, this approach lacks logical coherence: the average point weights the three leaders uniformly, giving no apparent advantage to any one of them. It is more reasonable to locate the new position so that it maximizes proximity to the alpha wolf while maintaining relative distances from the beta and delta wolves. Specifically, the distance to the alpha wolf (referred to as “*a*”) should be less than the distance to the beta wolf (“*b*”), which in turn should be less than the distance to the delta wolf (“*c*”). This configuration ensures that the new position is closest to the alpha wolf, relatively close to the beta wolf, and farthest from the delta wolf.

Addressing this problem involves two key tasks. Localizing the critical point: the first task is to determine the precise location of the critical point where the agents should be positioned in the next iteration of the GWO algorithm. This critical point, ideally situated to maximize the algorithm’s efficiency, must be identified through a systematic and precise method.

Optimizing the distance ratios (*a*, *b*, *c*): the second task involves determining the optimal ratio between the three distances (*a*, *b*, and *c*). Finding the right balance between these distances is crucial for the algorithm’s performance; this optimization seeks to ensure that the new position adheres to the logical hierarchy of distances from the leader wolves. By addressing these tasks, we aim to refine and enhance the updating step within the GWO algorithm. This approach holds the potential to significantly improve the algorithm’s efficiency and effectiveness in solving optimization problems, offering a more coherent and biologically inspired solution.

3. Technique of Updating Positions

In this section, we introduce an innovative approach designed to enhance the efficiency and effectiveness of the Grey Wolf Optimization (GWO) algorithm by revisiting the pivotal step of updating agents’ positions.

3.1. Objectives of the Algorithm

Let ${x}_{alpha}$, ${x}_{beta}$, and ${x}_{delta}$ denote the coordinates of the three agents with the best fitness values (the alpha, beta, and delta wolves, respectively).

The primary objective of our proposed technique is to replace the conventional practice of positioning wolves at the average point of the three coordinates ${x}_{i}^{alpha}$, ${x}_{i}^{beta}$, and ${x}_{i}^{delta}$, where ${x}_{i}^{alpha}$, ${x}_{i}^{beta}$, and ${x}_{i}^{delta}$ are the relative coordinates of the wolf’s current position with respect to the three leader wolves (alpha, beta, and delta) and the index “*i*” refers to a certain wolf. The relative coordinate with respect to the alpha wolf is computed as follows:

${x}_{i}^{alpha}={x}_{alpha}-{A}_{1}{D}_{alpha}$ (1)

${D}_{alpha}=\left|{C}_{1}{x}_{alpha}-{x}_{i}^{k}\right|$ (2)

${C}_{1}=2\ast rnd$ (3)

${A}_{1}=2\ast a\ast rnd-a$ (4)

where ${x}_{i}^{k}$ refers to the position of a certain wolf “*i*” at iteration number “*k*”, *a* is a control parameter equal to (number of iterations − 1) at the first iteration, and $0\le rnd\le 1$ is a random number.

The same steps are applied to obtain ${x}_{i}^{beta}$ and ${x}_{i}^{delta}$.
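As an illustrative sketch of Equations (1)-(4) (the function name and NumPy vectorization are our own; only the formulas come from the text), the leader-relative coordinate for one leader can be computed as:

```python
import numpy as np

def leader_relative_position(x_leader, x_i, a):
    """Compute a leader-relative coordinate, e.g. x_i^alpha.

    x_leader : position vector of a leader wolf (alpha, beta, or delta)
    x_i      : current position of wolf i (x_i^k)
    a        : the decreasing control parameter
    """
    C = 2 * np.random.rand(*x_i.shape)            # Equation (3)
    A = 2 * a * np.random.rand(*x_i.shape) - a    # Equation (4)
    D = np.abs(C * x_leader - x_i)                # Equation (2)
    return x_leader - A * D                       # Equation (1)
```

Calling this three times, with the alpha, beta, and delta positions, yields the three vectors that the update rules below combine.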

In the conventional GWO algorithm, the new position of a certain wolf is calculated as follows:

${x}_{i}^{k+1}=\frac{{x}_{i}^{alpha}+{x}_{i}^{beta}+{x}_{i}^{delta}}{3}$ (5)

where ${x}_{i}^{k+1}$ is the new position of the same wolf “*i*”. Our technique replaces this plain average with a more refined and biologically inspired positioning strategy that places the highest emphasis on the alpha wolf, a slightly reduced priority on the beta wolf, and the lowest priority on the delta wolf. The fundamental aim is to align the new position with the hierarchy of the wolf pack, optimizing its proximity to the alpha wolf while maintaining appropriate relative distances from the beta and delta wolves.
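In code, the conventional update of Equation (5) is a plain, equally weighted average (a one-line sketch; the naming is ours):

```python
import numpy as np

def gwo_average_update(x_alpha_i, x_beta_i, x_delta_i):
    # Equation (5): equal weight for all three leader-relative vectors.
    return (x_alpha_i + x_beta_i + x_delta_i) / 3.0
```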

3.2. Methodology

To achieve this objective, we employ a rigorous mathematical analysis. Our approach entails calculating the optimal coordinates for the wolf’s new position: instead of taking the average of ${x}_{i}^{alpha}$, ${x}_{i}^{beta}$, and ${x}_{i}^{delta}$, we give the highest priority to ${x}_{i}^{alpha}$, followed by ${x}_{i}^{beta}$.

3.3. New Position Calculation

The calculation of the agent’s new position involves determining a point that fulfills the following criteria:

Distance “*a*” (to the alpha wolf) is minimized to position the new location of the agents closest to the alpha wolf.

Distance “*b*” (to the beta wolf) is adjusted to provide a slightly lower priority than “*a*”.

Distance “*c*” (to the delta wolf) is maximized to maintain the relative distance hierarchy.

Additionally, we delve into the optimization of the ratios between these distances “*a*”, “*b*”, and “*c*”. Seeking the optimal balance between these ratios further enhances the precision of our proposed technique. By systematically solving these mathematical equations and optimizing the parameters involved, our proposed technique ensures a more biologically plausible and efficient approach for updating the agents’ positions within the GWO algorithm. In the subsequent sections, we present the mathematical formulations and detailed algorithms utilized in our approach, along with empirical results that highlight the advantages of this novel technique in terms of optimization outcomes and computational efficiency.

In GWO, for a minimization problem, the fitness values *F* of the three leader wolves satisfy ${F}_{alpha\text{\hspace{0.17em}}wolf}<{F}_{beta\text{\hspace{0.17em}}wolf}<{F}_{delta\text{\hspace{0.17em}}wolf}$.

To ensure that distance *a* < distance *b* < distance *c*, we can make the distances proportional to the fitness values of the leader wolves as follows:

$a:b:c=\frac{{F}_{alpha\text{\hspace{0.17em}}wolf}}{{F}_{delta\text{\hspace{0.17em}}wolf}}:\frac{{F}_{beta\text{\hspace{0.17em}}wolf}}{{F}_{delta\text{\hspace{0.17em}}wolf}}:1$ (6)
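A minimal helper for Equation (6) (the function name is our own):

```python
def distance_ratios(F_alpha, F_beta, F_delta):
    """Equation (6): distance ratios a : b : c from the leader fitness values.
    For a minimization problem F_alpha < F_beta < F_delta, so a < b < c."""
    return F_alpha / F_delta, F_beta / F_delta, 1.0
```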

The three vectors $\left({x}_{1},{y}_{1}\right)$, $\left({x}_{2},{y}_{2}\right)$, and $\left({x}_{3},{y}_{3}\right)$ represent ${x}_{i}^{alpha}$, ${x}_{i}^{beta}$, and ${x}_{i}^{delta}$, which are related to the three leaders (alpha wolf, beta wolf, and delta wolf, respectively); their values differ from one agent to another. The new position ${x}_{i}^{k+1}=\left(x,y\right)$ for a certain agent can be calculated mathematically as follows:

${\left({x}_{1}-x\right)}^{2}+{\left({y}_{1}-y\right)}^{2}={a}^{2}$ (7)

${\left({x}_{2}-x\right)}^{2}+{\left({y}_{2}-y\right)}^{2}={b}^{2}$ (8)

${\left({x}_{3}-x\right)}^{2}+{\left({y}_{3}-y\right)}^{2}={c}^{2}$ (9)

Each of Equations (7)-(9) is a circle equation, and we solve the system for *x* and *y*. In practice, the solution of this system is not always real, but the preceding analysis points toward an alternative approach.
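To see why the circle system is fragile, one can attempt the solution directly: subtracting Equation (7) from Equations (8) and (9) removes the quadratic terms and leaves a linear 2×2 system, whose solution satisfies all three circles only when they actually share a common point. The sketch below is our illustration, not part of the proposed algorithm:

```python
import numpy as np

def trilaterate(p1, p2, p3, a, b, c):
    """Attempt to solve Equations (7)-(9) for (x, y).

    The returned residual measures how far the linearized solution is
    from satisfying the first circle exactly; it is zero only when the
    three circles share a common point.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    rhs = np.array([a**2 - b**2 - x1**2 - y1**2 + x2**2 + y2**2,
                    a**2 - c**2 - x1**2 - y1**2 + x3**2 + y3**2])
    xy = np.linalg.solve(A, rhs)
    residual = abs(np.hypot(*(xy - np.array(p1))) - a)
    return xy, residual
```

When the three prescribed distances are not mutually consistent with the leader geometry, the residual is nonzero, which is exactly the situation that motivates the weighted-average alternative below.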

We therefore seek a relation for the proposed point “*q*” that yields a gradual increase of the three distances, which makes sense and gives priority to the alpha wolf. The point “*q*” is a weighted average of the positions of the three leader wolves. First, we discuss the power of the weighted average in shaping the distances, and then we derive appropriate values for these weights.

“*p*” denotes the new position under traditional GWO, while “*q*” denotes the new position under our proposed technique.

As shown in Table 1, for five randomly positioned sets of leader wolves, point “*p*” is the new position of the agents under the traditional GWO technique. It is calculated as the average of the three leader positions, alpha wolf $\left({x}_{1},{y}_{1}\right)$, beta wolf $\left({x}_{2},{y}_{2}\right)$, and delta wolf $\left({x}_{3},{y}_{3}\right)$, as follows:

$p=\frac{\left({x}_{1},{y}_{1}\right)+\left({x}_{2},{y}_{2}\right)+\left({x}_{3},{y}_{3}\right)}{3}$ (10)

$\text{Distance to alpha wolf}=\sqrt{{\left({x}_{1}-x\right)}^{2}+{\left({y}_{1}-y\right)}^{2}}$ (11)

$\text{Distance to beta wolf}=\sqrt{{\left({x}_{2}-x\right)}^{2}+{\left({y}_{2}-y\right)}^{2}}$ (12)

$\text{Distance to delta wolf}=\sqrt{{\left({x}_{3}-x\right)}^{2}+{\left({y}_{3}-y\right)}^{2}}$ (13)

Table 1. Comparison of the distances in case of traditional GWO and the new proposed technique.

Point “*q*”, the new proposed position of the agents, is calculated as follows:

$q=\frac{\frac{1}{{r}_{1}}\ast \left({x}_{1},{y}_{1}\right)+\frac{1}{{r}_{2}}\ast \left({x}_{2},{y}_{2}\right)+\frac{1}{{r}_{3}}\ast ({x}_{3},{y}_{3})}{\frac{1}{{r}_{1}}+\frac{1}{{r}_{2}}+\frac{1}{{r}_{3}}}$ (14)

where *r*_{1}, *r*_{2}, and *r*_{3} are values satisfying ${r}_{1}<{r}_{2}<{r}_{3}$. In Table 1, ${r}_{1}=0.1$, ${r}_{2}=0.3$, and ${r}_{3}=0.6$.
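A sketch of Equation (14) in code (the function name is ours; the weights are the Table 1 values):

```python
import numpy as np

def weighted_new_position(leaders, r):
    """Proposed update (Equation (14)): weighted average of the three
    leader points with weights 1/r1 > 1/r2 > 1/r3, so the alpha wolf
    pulls the new position hardest."""
    w = 1.0 / np.asarray(r, dtype=float)
    pts = np.asarray(leaders, dtype=float)        # shape (3, dim)
    return (w[:, None] * pts).sum(axis=0) / w.sum()
```

For example, with r = (0.1, 0.3, 0.6) and collinear leaders at x = 0, 1, 2, the resulting *q* lies at x = 4/9, so its distances to the alpha, beta, and delta wolves increase monotonically, as intended.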

As shown in Table 1, under the traditional grey wolf technique the distances between the new position of a certain wolf and the three points ${x}_{i}^{alpha}$, ${x}_{i}^{beta}$, and ${x}_{i}^{delta}$ are not guaranteed to increase gradually from the alpha wolf to the beta wolf, ending with the delta wolf.

On the other hand, in our new proposed technique, the distances do increase gradually from the alpha wolf to the beta wolf, ending with the delta wolf.

The values of *r*_{1}, *r*_{2}, and *r*_{3} are critical and must be optimized for best performance, so different cases of *r*_{1}, *r*_{2}, and *r*_{3} are compared with respect to the resulting fitness value. One suggested choice for the three ratios *r*_{1}, *r*_{2}, and *r*_{3} is the ratio between the fitness values of the three leader wolves:

${r}_{1}:{r}_{2}:{r}_{3}=\frac{{F}_{alpha\text{\hspace{0.17em}}wolf}}{{F}_{delta\text{\hspace{0.17em}}wolf}}:\frac{{F}_{beta\text{\hspace{0.17em}}wolf}}{{F}_{delta\text{\hspace{0.17em}}wolf}}:1$ (15)

However, there is no guarantee that this ratio is the one that yields the best final value of the model, so the general formula in Equation (16) was formulated to cover all possibilities and find the best ratio.

$q=\frac{\frac{1}{k}\ast \left({x}_{1},{y}_{1}\right)+\frac{1}{m}\ast \left({x}_{2},{y}_{2}\right)+\frac{1}{n}\ast \left({x}_{3},{y}_{3}\right)}{\frac{1}{k}+\frac{1}{m}+\frac{1}{n}}$ (16)

where $k,m,n$ take values from 0.1 to 1 with a step size of 0.1.

Three nested loops are used to iterate over all possible vectors $\left(k,m,n\right)$ with $k<m<n$, which ensures that $\frac{1}{k}>\frac{1}{m}>\frac{1}{n}$.
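The nested-loop search is equivalent to enumerating all strictly increasing triples from the grid; a sketch using `itertools.combinations`, which yields exactly the triples with $k<m<n$ (the function name is ours):

```python
from itertools import combinations

import numpy as np

def candidate_ratios(step=0.1, stop=1.0):
    """All (k, m, n) with k < m < n drawn from {0.1, 0.2, ..., 1.0},
    so that 1/k > 1/m > 1/n as Equation (16) requires."""
    grid = np.round(np.arange(step, stop + step / 2, step), 1)
    return list(combinations(grid, 3))
```

For the 10-point grid this produces C(10, 3) = 120 candidate triples to evaluate.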

4. The Used Benchmark Function

In this section, the 10 benchmark functions used are presented in Table 2, which lists the 10 benchmark (objective) functions, their normal ranges, their dimensions, and their minimum fitness values (*F*_{minimum}). These 10 benchmark functions are denoted *F*_{1} to *F*_{10}.

5. Comparing the Traditional GWO with Our New Enhancement Based on Ten Benchmark Functions

The experimental conditions are as follows: the maximum number of iterations is 500, and the number of search agents is 30. The initial random positions, as well as the random parameters in Equations (3) and (4), are fixed at 0.33 for both the traditional GWO and the new proposed technique, to ensure that the only factor affecting model efficiency is the way the wolves are positioned in each iteration. Table 3 shows a comparison between the traditional GWO and the new proposed technique at its best ratio of *k*, *m*, and *n*.
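As an illustration only (not the authors’ code), the following minimal GWO loop freezes the random draws at 0.33, as in the setup above, so that two runs differ only in the update rule. The seed, search bounds, sphere objective, and the standard linearly decreasing parameter *a* are our assumptions:

```python
import numpy as np

def sphere(x):
    """A common benchmark objective: F(x) = sum(x_i^2), minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def gwo(obj, dim=2, agents=30, iters=500, rnd=0.33, proposed=False, r=(0.1, 0.3, 0.6)):
    # Minimal, illustrative GWO loop with the random draws frozen at `rnd`.
    rng = np.random.default_rng(42)                # assumed seed
    X = rng.uniform(-10, 10, (agents, dim))        # assumed search bounds
    for t in range(iters):
        fit = np.array([obj(x) for x in X])
        leaders = X[np.argsort(fit)[:3]]           # alpha, beta, delta
        a = 2 * (1 - t / iters)                    # standard linearly decreasing parameter
        A = 2 * a * rnd - a                        # Equation (4) with rnd fixed
        C = 2 * rnd                                # Equation (3) with rnd fixed
        # Leader-relative targets, shape (agents, 3, dim): Equations (1)-(2)
        rel = leaders[None] - A * np.abs(C * leaders[None] - X[:, None])
        if proposed:
            w = 1.0 / np.asarray(r)
            X = (rel * w[None, :, None]).sum(axis=1) / w.sum()   # Equation (14)
        else:
            X = rel.mean(axis=1)                                  # Equation (5)
    return min(obj(x) for x in X)

best_traditional = gwo(sphere)
best_proposed = gwo(sphere, proposed=True)
```

Swapping `proposed=False` for `proposed=True` is the only change between the two runs, mirroring the controlled comparison described above.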

Table 2. Benchmark functions.

Table 3. Results of traditional GWO and the proposed enhancement using 10 benchmark functions at 30 search agents.

As shown in Table 3, the new proposed technique yields a significant improvement in the minimum value obtained from the model; in some cases the reduction reaches 97%, as for function *F*_{7}. This improvement can have a great impact in critical fields such as aerodynamics, space technology, and medicine.

6. Conclusion and Future Work

In conclusion, Grey Wolf Optimization (GWO) has established itself as a robust and effective optimization algorithm, drawing inspiration from the coordinated hunting behavior of grey wolves. Over the years, it has proven its mettle in solving complex optimization problems, showcasing adaptability and efficiency. In this paper, we embarked on a comprehensive exploration of GWO, shedding light on its theoretical foundations, mechanisms, and practical applications. While recognizing the substantial achievements of GWO, we introduced a novel technique aimed at enhancing its performance by refining the process of updating the position of the agents. Our experimental results have demonstrated the superiority of our proposed approach over the traditional GWO method.

In the future, we can refine our proposed technique by optimizing its parameters and mechanisms for various problem domains. Exploring hybrid approaches with other optimization algorithms offers potential for improved results and increased robustness. Adapting GWO to multi-objective tasks would address real-world complexity, and investigating parallel computing would enhance scalability to larger challenges. Applying our technique to practical problems in fields such as engineering and finance would further validate its effectiveness. A thorough theoretical analysis will provide deeper insights, and user-friendly software tools can broaden its adoption, advancing the field of optimization algorithms.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Manika, S. and Prableen, K. (2021) A Comprehensive Analysis of Nature-Inspired Meta-Heuristic Techniques for Feature Selection Problem. Archives of Computational Methods in Engineering, 28, 1103-1127. https://doi.org/10.1007/s11831-020-09412-6

[2] Mirjalili, S., Mirjalili, S.M. and Lewis, A. (2014) Grey Wolf Optimizer. Advances in Engineering Software, 69, 46-61. https://doi.org/10.1016/j.advengsoft.2013.12.007

[3] Grzegorz, R., Thomas, B. and Joost, K. (2012) Handbook of Natural Computing. Springer, New York.

[4] Carlos, C., David, V. and Gary, L. (2007) Evolutionary Algorithms for Solving Multi-Objective Problems. 2nd Edition, Springer, New York.

[5] Ou, Y., Yin, P. and Mo, L. (2023) An Improved Grey Wolf Optimizer and Its Application in Robot Path Planning. Biomimetics, 8, Article 84. https://doi.org/10.3390/biomimetics8010084

[6] Lian, Z., Shu, J., Zhang, Y., et al. (2023) Convergent Grey Wolf Optimizer Metaheuristics for Scheduling Crowdsourcing Applications in Mobile Edge Computing. IEEE Internet of Things Journal, 11, 1866-1879. https://doi.org/10.1109/JIOT.2023.3304909

[7] Hao, P. and Sobhani, B. (2021) Application of the Improved Chaotic Grey Wolf Optimization Algorithm as a Novel and Efficient Method for Parameter Estimation of Solid Oxide Fuel Cells Model. International Journal of Hydrogen Energy, 46, 36454-36465. https://doi.org/10.1016/j.ijhydene.2021.08.174

[8] Ahmadi, B., Younesi, S., Ceylan, O., et al. (2022) An Advanced Grey Wolf Optimization Algorithm and Its Application to Planning Problem in Smart Grids. Soft Computing, 26, 3789-3808. https://doi.org/10.1007/s00500-022-06767-9

[9] Deshmukh, N., Vaze, R., Kumar, R., et al. (2022) Quantum Entanglement Inspired Grey Wolf Optimization Algorithm and Its Application. Evolutionary Intelligence, 16, 1097-1114. https://doi.org/10.1007/s12065-022-00721-2

