Article

Multi-Strategy Improved Red-Tailed Hawk Algorithm for Real-Environment Unmanned Aerial Vehicle Path Planning

1 Laboratory for Robot Mobility Localization and Scene Deep Learning Technology, Guizhou Equipment Manufacturing Polytechnic, Guiyang 550025, China
2 State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Submission received: 9 December 2024 / Revised: 31 December 2024 / Accepted: 2 January 2025 / Published: 6 January 2025

Abstract

In recent years, unmanned aerial vehicle (UAV) technology has advanced significantly, enabling its widespread use in critical applications such as surveillance, search and rescue, and environmental monitoring. However, planning reliable, safe, and economical paths for UAVs in real-world environments remains a significant challenge. In this paper, we propose a multi-strategy improved red-tailed hawk (IRTH) algorithm for UAV path planning in real environments. First, we enhance the quality of the initial population by using a stochastic reverse learning strategy based on Bernoulli mapping. Then, a dynamic position update optimization strategy based on stochastic mean fusion strengthens the exploration capability of the algorithm, helping it explore promising solution spaces more effectively. Additionally, we propose an optimization method for frontier position updates based on a trust domain, which better balances exploration and exploitation. To evaluate the effectiveness of the proposed algorithm, we compare it with 11 other algorithms on the IEEE CEC2017 test set and perform statistical analysis to assess the differences. The experimental results demonstrate that the IRTH algorithm yields competitive performance. Finally, to validate its applicability in real-world scenarios, we apply the IRTH algorithm to the UAV path-planning problem in practical environments, where it achieves improved results and successfully plans paths for UAVs.

1. Introduction

Unmanned aerial vehicle (UAV) technology is of paramount importance at the present time. In the military field, it is a key force in modern warfare. Reconnaissance UAVs can break through high-risk environments to obtain key information, while attack UAVs can realize precision strikes and reduce casualties, greatly changing military strategies and combat modes [1]. In the civil field, UAVs can efficiently obtain high-precision data in geographic surveying and mapping, solving traditional surveying and mapping problems [2]. In environmental monitoring, they can penetrate into complex zones to provide timely and accurate information for environmental protection and resource management. In agricultural production, they can promote the development of precision agriculture and improve yield and quality [3], and they can quickly respond to emergency rescues to rapidly assess the situation of a disaster, search for survivors, and buy time for rescue operations [4].
Path planning, as a key technology for UAVs to perform tasks, is the cornerstone guaranteeing the safe, efficient, and reliable operation of UAVs, and provides solid support for the wide application and expansion of UAV technology in various fields. In complex practical application scenarios, whether in military reconnaissance and strike missions or in civil actions such as geographic surveying and mapping [5], environmental monitoring [6], agricultural operations [7], and emergency rescue [4], reasonable path planning is the primary prerequisite for the UAV to complete its tasks successfully. It lies at the core of intelligent UAV navigation, determining the optimal flight route from the starting point to the target point. Accurate path-planning technology enables the UAV to navigate flexibly in environments full of obstacles, for example, skillfully avoiding tall buildings when performing tasks in cities, reducing the risk of collision and guaranteeing flight safety [8]. At the same time, in the face of multiple mission requirements and complex environmental constraints, good path planning enables the UAV to fly in the most energy-efficient way, effectively extending endurance time and improving the efficiency of mission execution. In addition, for concealment needs in military operations [9], path-planning technology can help UAVs choose routes that are not easily detected by the enemy, enhancing the confidentiality and success rate of military operations.
Path-planning algorithms can be mainly classified into two categories: traditional algorithms and intelligent optimization algorithms. The traditional path-planning algorithms mainly include three kinds: Dijkstra's algorithm [10], the A* algorithm [11], and Floyd's algorithm [12]. Among them, Dijkstra's algorithm iteratively calculates the shortest paths to other nodes starting from the initial node; it is a deterministic, globally optimal search, but it lacks heuristic information to guide it, which may result in excessive computation in large, complex environments [13]. The A* algorithm is based on heuristic information and reduces unnecessary node expansion by guiding the search through a heuristic function [14]. Its time complexity is related to the quality of the heuristic function, and it degenerates to Dijkstra's algorithm in the worst case; the A* algorithm therefore relies heavily on the definition and consistency of the heuristic function. Floyd's algorithm finds the shortest paths between all vertex pairs by considering intermediate-node update information. Although its computational complexity is high, it can handle complex path relationships and is suitable for scenarios that require comprehensive shortest-path information.
However, traditional algorithms often struggle with complex terrain, dynamic obstacles, and multiple constraints, while intelligent optimization algorithms can effectively find feasible paths through mechanisms such as the evolution in genetic algorithms and the group collaboration in particle swarm algorithms. The intelligent optimization algorithms mainly include classical genetic algorithms, particle swarm optimization algorithms, and ant colony algorithms, as well as the newly proposed Neural Dynamics Optimization [15], Enterprise Development Optimization [16], and Polar Lights Optimization [17]. In recent years, with the rise of various types of optimization problems, intelligent optimization algorithms have made remarkable advancements. On one hand, classical intelligent optimization algorithms have been continuously improved and refined. For example, on the basis of retaining its core idea of simulating biological evolution, genetic algorithms have improved convergence speed and optimization accuracy by refining genetic operations and adjusting parameter settings, making them more efficient in solving complex optimization problems [18]. The particle swarm optimization algorithm is also constantly exploring new learning strategies and parameter adaptive mechanisms to enhance the performance and adaptability of the algorithm to cope with different types of problems [19]. The ant colony algorithm, on the other hand, has carried out in-depth research on pheromone updating rules and path-searching strategies to further improve the quality and efficiency of solutions [20].
On the other hand, new intelligent optimization algorithms keep emerging. For example, the neural population dynamics optimization algorithm [15], a new type of intelligent optimization algorithm, uses an attractor trend strategy to guide the neural population toward optimal decisions, ensuring the algorithm's exploitation ability, and enhances its exploration ability by letting neural populations diverge from the attractor through coupling with other neural populations. Finally, an information projection strategy controls communication between the neural populations, facilitating the transition from exploration to exploitation and offering new ideas and methods for solving complex optimization problems. Moreover, Hashim et al. proposed the Archimedes optimization algorithm (AOA) based on the Archimedes principle [21]. The algorithm optimizes by simulating the buoyancy force applied upward on an object partially or completely submerged in a fluid, and experiments on CEC2017 and four engineering design problems show that AOA is a high-performance optimization tool for solving complex problems efficiently. Some hybrid intelligent optimization algorithms also combine different approaches, leveraging the strengths of various algorithms, overcoming the shortcomings of a single approach, and demonstrating strong performance on complex multimodal and constrained optimization problems. Meanwhile, the application fields of intelligent optimization algorithms are continuously expanding. In logistics, they are widely used for path planning, warehouse layout optimization, and distribution route optimization to help enterprises improve operational efficiency and reduce costs. In the energy sector, they are used to optimize integrated energy systems and microgrid scheduling, achieving efficient use and reasonable distribution of energy. Intelligent optimization algorithms also play an increasingly important role in engineering design, machine learning, artificial intelligence, and other fields, providing strong support for solving various complex practical problems. Table 1 summarizes the most recent optimization algorithms.
The RTH algorithm [35], as one of the intelligent optimization algorithms, has been widely used in various fields since it was proposed. Houari et al. applied the RTH algorithm to extract parameters for proton exchange membrane fuel cells and tested it on seven real-world engineering problems, benchmarking it against other published algorithms; the experimental results indicated that RTH outperforms most other methods in the majority of cases. Furthermore, to address the issue of conventional maximum power point tracking (MPPT) being unable to differentiate between local and global maximum power points, Almousa et al. introduced a single-sensor global MPPT method based on the RTH algorithm for photovoltaic (PV) systems connected via DC links and operating under partial shading conditions [36]. This method effectively reduces the number of sensors and decreases the controller's cost. However, as the no-free-lunch (NFL) theorem suggests that no algorithm performs optimally across all optimization problems, it is essential to refine the algorithm for our specific optimization challenge. Qin et al. proposed an enhanced RTH algorithm (ERTH) with multiple elite strategies and chaotic mapping for solving multi-cost optimization problems in cloud task scheduling [37]. The ERTH algorithm achieved competitive results through experimental validation.
Based on the above research, this paper proposes a multi-strategy improved RTH algorithm (IRTH). The specific contributions are as follows.
  • A stochastic reverse learning strategy based on Bernoulli mapping is utilized to enhance the quality of the population, allowing the algorithm to explore more promising spaces.
  • The dynamic position update optimization strategy using stochastic mean fusion makes the algorithm less likely to fall into a local optimal solution during exploration and increases the probability that the algorithm will find a globally optimal solution.
  • The convergence speed of the algorithm is improved using a trust domain-based optimization method for frontier position updating, which employs a dynamic trust domain radius to provide a trade-off between convergence speed and accuracy, achieving better performance.
  • The algorithm was qualitatively analyzed using 29 test functions from the IEEE CEC2017 test set and compared with 11 other algorithms, obtaining competitive results. Most importantly, the results were statistically analyzed to fully characterize the performance of IRTH.
  • The IRTH algorithm is applied to the UAV path-planning problem in a real environment and compared with other comparative algorithms.
The remainder of this paper is organized as follows: Section 2 gives a brief introduction to the original RTH algorithm; Section 3 details the trust domain approach and the other improvement strategies proposed in this paper; in Section 4, we apply the IRTH algorithm in numerical optimization experiments and analyze the experimental results in detail; in Section 5, we apply the algorithm to the real-environment UAV path-planning problem and provide a comprehensive analysis of its advantages and disadvantages; in Section 6, we summarize the work in this paper and outline directions for future research.

2. Red-Tailed Hawk (RTH) Algorithm

In this section, since the algorithm proposed in this paper is an improvement on the RTH algorithm, we provide a brief description of the RTH algorithm. The RTH algorithm is inspired by the hunting behavior of the red-tailed hawk. During hunting, the red-tailed hawk goes through three phases: high soaring, low soaring, and swooping. The mathematical model of the three parts is as follows:

2.1. High-Soaring Stage

During a red-tailed hawk’s hunt, it soars through the sky in order to better find food. This behavior is modeled as shown in Equation (1).
$$X(t) = X_{best} + \left(X_{mean} - X(t-1)\right) \cdot Levy(dim) \cdot TF(t),$$
where $X(t)$ denotes the current position of the red-tailed hawk, $X_{best}$ denotes the current optimal position, $X_{mean}$ denotes the current average position, $Levy(dim)$ denotes the Levy flight distribution function, which is determined using Equation (2), and $TF(t)$ represents the transition factor function, calculated using Equation (4).
$$Levy(dim) = s \cdot \frac{\mu \, \sigma}{|v|^{1/\beta}},$$
where $s$ and $\beta$ are constants set to 0.01 and 1.5, respectively; $dim$ is the problem dimension; $\mu$ and $v$ are random numbers in the range $[0, 1]$; and $\sigma$ is calculated according to Equation (3).
$$\sigma = \left( \frac{\Gamma(1+\beta) \sin\left(\frac{\pi \beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \beta \, 2^{\frac{\beta-1}{2}}} \right)^{1/\beta},$$
$$TF(t) = 1 + \sin\left(2.5 + \frac{t}{T_{max}}\right),$$
where $T_{max}$ denotes the maximum number of iterations.

2.2. Low-Soaring Stage

After finding suitable prey through the high-soaring phase, the red-tailed hawk will fly low to lock on to the prey for better hunting. This behavior is modeled as shown in Equation (5).
$$X(t) = X_{best} + \left(x(t) + y(t)\right) \cdot StepSize(t),$$
where $StepSize(t)$ is calculated according to Equation (6).
$$StepSize(t) = X(t) - X_{mean},$$
where x and y represent directional coordinates, computed using Equation (7).
$$\begin{cases} x(t) = R(t) \sin(\theta(t)) \\ y(t) = R(t) \cos(\theta(t)) \end{cases} \quad \begin{cases} R(t) = R_0 \left(r - \frac{t}{T_{max}}\right) \cdot rand \\ \theta(t) = A \left(1 - \frac{t}{T_{max}}\right) \cdot rand \end{cases} \quad \begin{cases} x(t) = \frac{x(t)}{\max|x(t)|} \\ y(t) = \frac{y(t)}{\max|y(t)|} \end{cases},$$
where $R_0$ represents the initial radius, taking values in $[0.5, 3]$; $A$ represents the angle gain, taking values in $[5, 15]$; $rand$ is a random gain in $[0, 1]$; and $r$ is a control gain in $[1, 2]$.

2.3. Stooping and Swooping Stage

After a high-flying phase and a low-flying phase, the red-tailed hawk locks on to its prey, at which point it needs to hunt. During this phase, the red-tailed hawk will dive at the prey to ensure a kill shot, so this behavior is modeled using Equation (8).
$$X(t) = \alpha(t) \, X_{best} + x(t) \cdot StepSize1(t) + y(t) \cdot StepSize2(t),$$
where $StepSize1(t)$ can be calculated according to Equation (9), and $StepSize2(t)$ can be calculated according to Equation (10).
$$StepSize1(t) = X(t) - TF(t) \cdot X_{mean},$$
$$StepSize2(t) = G(t) \cdot X(t) - TF(t) \cdot X_{best},$$
where α and G represent the acceleration and gravity factors, respectively, and they can be determined using Equations (11) and (12).
$$\alpha(t) = \sin^2\left(2.5 \, \frac{t}{T_{max}}\right),$$
$$G(t) = 2\left(1 - \frac{t}{T_{max}}\right),$$
where $\alpha$ represents the hawk's acceleration, which grows with increasing $t$ to enhance convergence speed, while $G$ signifies the gravitational effect, which weakens as the hawk approaches the prey, thereby reducing exploitation diversity. The pseudocode of the RTH is outlined in Algorithm 1.
Algorithm 1. The pseudo-code of the RTH.
1: Begin
2: Initialize: the relevant parameters.
3: Initialization: random generation within the search space.
4:  While t < T_max do
5:    High-soaring stage:
6:      Update the population by Equation (1)
7:    Low-soaring stage:
8:      Update the population by Equation (5)
9:    Stooping and Swooping stage:
10:      Update the population by Equation (8)
11:     t = t + 1
12:  End while
13:  return best solution
14: end
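To make the three phases concrete, the following NumPy sketch implements Equations (1)-(12) for a single iteration. It is a minimal illustration, not the authors' reference implementation: the parameter values R0, A, and r are assumed midpoints of the ranges given above, and greedy selection and boundary handling are omitted.

```python
import numpy as np
from math import gamma, pi, sin

def levy(dim, s=0.01, beta=1.5):
    # Levy flight step, Equations (2)-(3)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu, v = np.random.rand(dim), np.random.rand(dim)
    return s * mu * sigma / np.abs(v) ** (1 / beta)

def rth_iteration(X, X_best, t, T_max, R0=1.5, A=10.0, r=1.5):
    """One pass over the three RTH phases; selection and bounds omitted."""
    N, dim = X.shape
    TF = 1 + np.sin(2.5 + t / T_max)                      # Equation (4)
    # High soaring, Equation (1)
    X = X_best + (X.mean(axis=0) - X) * levy(dim) * TF
    # Low soaring, Equations (5)-(7)
    rand = np.random.rand(N)
    R = R0 * (r - t / T_max) * rand
    theta = A * (1 - t / T_max) * rand
    x, y = R * np.sin(theta), R * np.cos(theta)
    x, y = x / np.abs(x).max(), y / np.abs(y).max()
    X = X_best + (x + y)[:, None] * (X - X.mean(axis=0))  # StepSize, Eq. (6)
    # Stooping and swooping, Equations (8)-(12)
    alpha = np.sin(2.5 * t / T_max) ** 2                  # Equation (11)
    G = 2 * (1 - t / T_max)                               # Equation (12)
    step1 = X - TF * X.mean(axis=0)                       # Equation (9)
    step2 = G * X - TF * X_best                           # Equation (10)
    return alpha * X_best + x[:, None] * step1 + y[:, None] * step2
```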

3. Proposed IRTH

The original RTH algorithm performs well on unimodal functions and possesses a simple structure; however, it is prone to falling into local optima when encountering complex real-world optimization problems. To overcome these problems, we propose an improved version of the RTH algorithm based on the trust domain. The details are as follows.

3.1. A Stochastic Reverse Learning Strategy Based on Bernoulli Mapping

In the RTH algorithm, the initial population is acquired by random initialization. A random initial population often leads to a dispersed distribution of individuals in the solution space, lacking targeted exploration of potentially optimal regions. Due to this randomness, many individuals may initially gather near a local optimum, which makes the search less efficient. Therefore, in this subsection, we propose a stochastic reverse learning strategy based on Bernoulli mapping to improve the RTH algorithm, which retains the original randomness while increasing the utilization of prior knowledge to enhance the performance of the algorithm.
Bernoulli transition mapping is a probabilistic transition mechanism that transforms the state of an individual based on the Bernoulli distribution. Mathematically, the Bernoulli distribution is a discrete probability distribution, and in this subsection, we improve it by using a two-part linear mapping, as shown in Equation (13).
$$x_{n+1} = \begin{cases} \dfrac{x_n}{1-\sigma}, & 0 < x_n \le 1-\sigma \\ \dfrac{x_n - (1-\sigma)}{\sigma}, & 1-\sigma < x_n \le 1 \end{cases},$$
where σ is set to 0.4. Reverse learning is a strategy that enhances the search capability of an algorithm by considering both the current solution and its inverse. In addition to evaluating and manipulating the regular solutions, the inverse solutions are generated, and by comparing the fitness values of the original and inverse solutions, the better one is selected to proceed to the next iteration. If the fitness of the inverse solution is better than that of the original solution, the original solution is replaced with the inverse solution, which allows the algorithm to potentially jump out of a local optimum, since the inverse solution may be located in a more promising region outside the current search area. The reverse solution is computed by Equation (14):
$$OBL_i = K \cdot (MAX + MIN) - X_i,$$
where $OBL_i$ denotes the oppositional solution obtained from particle $X_i$ through reverse learning; $K$ is a $1 \times dim$ matrix whose elements are random numbers between 0 and 1; and $MAX$ and $MIN$ are the maximum and minimum values of the individuals, respectively. When the fitness value of the opposing solution is better than that of the original solution, the original solution is updated to the opposing solution; otherwise, no update is performed. This is shown in Equation (15).
$$X_i = \begin{cases} OBL_i, & f(OBL_i) < f(X_i) \\ X_i, & \text{otherwise} \end{cases},$$
where f ( O B L i ) denotes the fitness value of the opposing solution and f ( X i ) denotes the fitness value of the original solution. Figure 1 graphically depicts this strategy for better understanding by the reader.
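A minimal sketch of this initialization strategy is given below. It assumes that MAX and MIN are taken per dimension over the current population and that the chaotic map is seeded with uniform random values; the function names are illustrative.

```python
import numpy as np

def bernoulli_map(n, dim, sigma=0.4):
    # Bernoulli shift map of Equation (13), seeded with uniform randoms
    seq = np.empty((n, dim))
    x = np.random.rand(dim)
    for i in range(n):
        x = np.where(x <= 1 - sigma,
                     x / (1 - sigma),
                     (x - (1 - sigma)) / sigma)
        seq[i] = x
    return seq

def init_population(f, N, dim, lb, ub, sigma=0.4):
    # Chaotic initial population mapped into the search bounds [lb, ub]
    X = lb + bernoulli_map(N, dim, sigma) * (ub - lb)
    # Stochastic reverse learning, Equations (14)-(15); MAX/MIN are taken
    # per dimension over the current population (an assumption)
    K = np.random.rand(N, dim)
    OBL = np.clip(K * (X.max(axis=0) + X.min(axis=0)) - X, lb, ub)
    fX = np.apply_along_axis(f, 1, X)
    fO = np.apply_along_axis(f, 1, OBL)
    keep = fO < fX
    X[keep] = OBL[keep]        # greedy selection of Equation (15)
    return X
```

For example, `init_population(lambda x: np.sum(x**2), 50, 30, -100, 100)` produces a 50-individual population for a 30-dimensional sphere function.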

3.2. Dynamic Position Update Optimization Strategy for Stochastic Mean Fusion

In the RTH algorithm, `randperm` is used to randomize the order of individuals in the population. This randomness helps to introduce a certain amount of diversity, but it also introduces uncertainty: each time the algorithm is run, differences in the randomized arrangement may lead to large differences in the convergence path and final result. In addition, the fixed calculation of $StepSize$ cannot adapt well to different optimization scenarios. For complex objective functions with different scale characteristics, a more flexible step-size adjustment strategy may be needed to ensure that the algorithm converges globally faster. Therefore, we propose a dynamic position update optimization strategy based on stochastic mean fusion, which helps the algorithm converge to the global optimum through two different step-size computation methods and uses an adaptive parameter to optimize the convergence speed in response to dynamically changing environments and problems. The improved position update is shown in Equation (16).
$$X(t) = \begin{cases} X_{best} + \alpha \cdot step1 \cdot R, & rand > 0.5 \\ X_{best} + \alpha \cdot step2 \cdot R, & \text{otherwise} \end{cases},$$
where $R$ is $randn(1, dim)$; $\alpha$ is an adaptive parameter, calculated by Equation (17); and $step1$ and $step2$ are two different dynamically varying step sizes that enable the algorithm to adapt to different optimization scenarios, calculated by Equations (18) and (19).
$$\alpha = \left(1 - \frac{t}{T_{max}}\right)^{\frac{2t}{T_{max}}},$$
$$step1 = X_{mean1} - X(t),$$
$$step2 = X_{mean2} - X(t),$$
where $X_{mean1}$ is obtained by drawing a random integer $p$ between 5 and 10, randomly selecting $p$ individuals from the population to form a subpopulation $X_p$, and taking the mean of $X_p$. Similarly, $X_{mean2}$ is obtained by drawing a random integer $q$ between 15 and $N$, randomly selecting $q$ individuals to form a subpopulation $X_q$, and taking the mean of $X_q$. A schematic of the dynamic position update optimization strategy based on stochastic mean fusion is given in Figure 2.
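The following sketch illustrates Equations (16)-(19). It assumes a population size N of at least 15 (so that q can be drawn from [15, N]) and is not the authors' reference code.

```python
import numpy as np

def mean_fusion_update(X, X_best, t, T_max):
    # Dynamic position update with stochastic mean fusion, Equations (16)-(19)
    N, dim = X.shape
    alpha = (1 - t / T_max) ** (2 * t / T_max)            # Equation (17)
    X_new = np.empty_like(X)
    for i in range(N):
        p = np.random.randint(5, 11)                      # p in [5, 10]
        q = np.random.randint(15, N + 1)                  # q in [15, N], assumes N >= 15
        X_mean1 = X[np.random.choice(N, p, replace=False)].mean(axis=0)
        X_mean2 = X[np.random.choice(N, q, replace=False)].mean(axis=0)
        step = (X_mean1 if np.random.rand() > 0.5 else X_mean2) - X[i]  # Eqs. (18)-(19)
        X_new[i] = X_best + alpha * step * np.random.randn(dim)         # Equation (16)
    return X_new
```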

3.3. Optimization Method for Frontier Position Update Based on Trust Domain

In the stooping and swooping stage, the algorithm must converge to the optimal solution as quickly as possible while retaining some exploration to avoid falling into a local optimum. The RTH algorithm uses relatively complicated combinations of $TF$, $G$, and related factors. In unimodal optimization problems, these elaborate combinations are unnecessarily costly to compute, wasting computational resources; in complex multimodal problems, they may fail to adapt effectively to changes in the function landscape, making it difficult to accurately guide the search toward the global optimum.
The trust domain method can effectively prevent the algorithm divergence problem caused by too large a step size by approximating the objective function in a trust domain and finding the optimal solution in this region. As the algorithm iterates, the radius of the trust domain is dynamically adjusted according to the change in the fitness value to improve the global convergence performance of the algorithm. In addition, the adaptive adjustment of the radius can also adapt to different scenarios faster, so that the algorithm can perform well in all kinds of functions and problems. In the IRTH algorithm, the trust domain-based optimization method for the frontier position update is determined by Equation (20).
$$X(t) = \begin{cases} X(t) + \left(X_{mean1} - TDX(R_1)\right) \cdot rand, & rand > 0.5 \\ X(t) + \left(X_{mean2} - TDX(R_1)\right) \cdot rand, & \text{otherwise} \end{cases},$$
where $R_1$ is a random index denoting a randomly selected individual in the trust domain, used to assist the position update, and $TDX$ denotes the trust domain population, which contains every individual located in the trust domain. In each round of position updates, we randomly select an individual from the trust domain, which preserves randomness while ensuring the convergence speed of the algorithm and improving its performance.
When the fitness value of the new solution is greater than that of the original solution, the position update has failed, and we increase the radius of the trust domain so that the algorithm explores more broadly. When the fitness value of the new solution is less than that of the original solution, the position update is effective, and we reduce the radius of the trust domain so that the algorithm converges faster. A schematic diagram of the trust domain-based optimization method for frontier position update is given in Figure 3.
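A sketch of this update and the radius rule is shown below. The membership test (Euclidean distance to the current best within the radius), the expansion and shrinkage factors, and the fallback when the trust domain is empty are all assumptions made for illustration.

```python
import numpy as np

def trust_domain_update(X, f, X_best, radius, grow=1.5, shrink=0.5):
    # Frontier position update within the trust domain, Equation (20)
    N, dim = X.shape
    dist = np.linalg.norm(X - X_best, axis=1)
    inside = np.flatnonzero(dist <= radius)       # members of TDX
    if inside.size == 0:
        inside = np.array([np.argmin(dist)])      # assumed fallback: nearest individual
    for i in range(N):
        p, q = np.random.randint(5, 11), np.random.randint(15, N + 1)
        Xm1 = X[np.random.choice(N, p, replace=False)].mean(axis=0)
        Xm2 = X[np.random.choice(N, q, replace=False)].mean(axis=0)
        tdx = X[np.random.choice(inside)]         # TDX(R1): random trust-domain member
        Xm = Xm1 if np.random.rand() > 0.5 else Xm2
        cand = X[i] + (Xm - tdx) * np.random.rand(dim)
        if f(cand) < f(X[i]):                     # successful update: shrink the domain
            X[i], radius = cand, radius * shrink
        else:                                     # failed update: expand to explore
            radius = radius * grow
    return X, radius
```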
Figure 4 depicts the flowchart of the IRTH algorithm, while its pseudocode is provided in Algorithm 2.
Algorithm 2. The pseudo-code of the IRTH.
1: Begin
2: Initialize: the relevant parameters.
3: Initialization: generate the initial population using the stochastic reverse learning strategy based on Bernoulli mapping (Section 3.1).
4:   While t < T_max do
5:    High-soaring stage:
6:       Update the population by Equation (1)
7:    Dynamic position update optimization strategy for stochastic mean fusion:
8:      Update the population by Equation (16)
9:    Optimization method for frontier position update based on trust domain:
10:      Update the population by Equation (20)
11:    t = t + 1
12:  End while
13:  return best solution
14: end

3.4. Computational Time Complexity

The performance of an algorithm is crucial, but it is equally important to assess its time complexity. In many optimization tasks, algorithms must not only deliver high performance but also demonstrate good real-time efficiency. Time complexity refers to how the runtime of an algorithm increases as the size of the input grows. Analyzing the time complexity of an optimization algorithm provides insight into the time overhead when handling large-scale problems. For the RTH algorithm, its time complexity mainly stems from the number of iterations and the stages of high soaring, low soaring, and stooping and swooping, each of which involves updating positions within the population. Therefore, the time complexity of RTH is $O(T \cdot dim \cdot N)$. In the IRTH algorithm, since it only improves position updates without adding new factors that increase complexity, its time complexity remains $O(T \cdot dim \cdot N)$.

4. Experimental Results and Detailed Analyses

In this section, we experimentally analyze the proposed IRTH algorithm using the CEC2017 test set. The benchmark function is very important for the performance evaluation of the algorithms, so we first introduce the CEC2017 test set [38]. Next, we present the parameter settings for each comparison algorithm. Then, we qualitatively analyze the IRTH algorithm. In addition, we conduct comparison experiments with 11 other algorithms using the CEC2017 test set. Finally, to fully validate the effectiveness of IRTH, we perform statistical analysis. To ensure the fairness and impartiality of the experiments, all the algorithm populations are set to 50, and the maximum number of iterations is set to 1000.

4.1. Benchmark Test Functions

The CEC2017 test set is widely used to evaluate the performance of optimization algorithms [39,40,41]. It covers a range of benchmark functions with different characteristics, including multimodal, unimodal, and high- and low-dimensional problems. It enables us to comprehensively evaluate the effectiveness of the algorithms on different types of problems.
Among them, two unimodal functions, seven simple multimodal functions, ten hybrid functions, and ten composition functions are included, which test the performance of the algorithms in different situations and provide a comprehensive evaluation.

4.2. Competitor Algorithms and Parameter Settings

In this section, we evaluate the performance of the IRTH algorithm by comparing it with 11 advanced algorithms to demonstrate its strong capabilities. We have chosen two classical optimization algorithms, the chameleon swarm algorithm (CSA) and the artificial gorilla troops optimizer (GTO); six novel optimization algorithms proposed in the last two years, namely the secretary bird optimization algorithm (SBOA), snow ablation optimizer (SAO), rime optimization algorithm (RIME), gold rush optimizer (GRO), red-billed blue magpie optimizer (RBMO), and enterprise development-inspired metaheuristic (ED); two improved optimization algorithms, the hyperheuristic whale optimization algorithm (HHWOA) and the improved grey wolf optimizer (IGWO); and the original red-tailed hawk algorithm (RTH). Selecting these three types of algorithms provides comprehensive coverage of existing optimization algorithms, so the experiments can comprehensively demonstrate the superiority of the algorithm proposed in this paper.
In the comparison experiments, we set the parameter values of each algorithm according to its original reference. Table 2 summarizes the parameter settings of these algorithms for ease of reading and lists the references for all algorithm parameter settings for further review.

4.3. Qualitative Analysis of IRTH

In this subsection, we conduct a qualitative analysis of the proposed IRTH algorithm. Initially, we examine the diversity of the algorithm’s population, which plays a crucial role in exploring the unknown space effectively. Next, we evaluate the balance between exploration and exploitation, as the initial iterations require stronger exploration, while later iterations focus more on exploitation. We validate the performance of IRTH through experiments that measure both exploration and exploitation. Lastly, to assess the effectiveness of the improvements made, we perform ablation experiments. Detailed explanations are provided below.

4.3.1. Analysis of the Population Diversity

In optimization algorithms, population diversity refers to the extent of variation among the individuals within a population [51]. These individuals usually represent possible solutions to the problem. If the diversity of the population is reduced, the algorithm may converge prematurely to a local optimum, limiting its ability to explore the global optimum. Conversely, maintaining a high level of population diversity allows the algorithm to search different regions of the solution space, enhancing the likelihood of finding the global optimum. In this subsection, we assess the population diversity of the IRTH algorithm, which is calculated using Equation (21).
$$IC(t) = \sqrt{\sum_{i=1}^{N} \sum_{d=1}^{D} \left( x_{id}(t) - c_d(t) \right)^2},$$
where $IC(t)$ denotes the population diversity, $N$ represents the population size, $D$ indicates the problem's dimensionality, and $x_{id}(t)$ denotes the value of the $i$-th individual in the $d$-th dimension at the $t$-th iteration. $c_d(t)$ is the centroid of the population in dimension $d$ at the $t$-th iteration, so $IC(t)$ reflects the degree of dispersion of the entire population relative to its center of mass. $c_d(t)$ is calculated through Equation (22).
$$c_d(t) = \frac{1}{N} \sum_{i=1}^{N} x_{id}(t).$$
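As a quick illustration, Equations (21) and (22) reduce to a few lines of NumPy (a minimal sketch, with the population stored as an N-by-D array):

```python
import numpy as np

def population_diversity(X):
    # IC(t) of Equation (21): dispersion around the population centroid
    c = X.mean(axis=0)                 # c_d(t), Equation (22)
    return np.sqrt(((X - c) ** 2).sum())
```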
Figure 5 shows the experimental results of the population diversity analysis for both algorithms, from which it can be seen that the population diversity of the IRTH algorithm is superior to that of the RTH algorithm in most cases. Whereas the population diversity of the RTH algorithm mostly decreases to a very low value within 100 generations, that of the IRTH algorithm decreases more slowly and is maintained well.

4.3.2. Analysis of the Exploration and Exploitation

In optimization algorithms, both exploration and exploitation play crucial roles. Exploration involves the algorithm performing a broad search of the solution space, aiming to uncover diverse regions, including unknown areas that may contain globally optimal solutions. Exploitation, on the other hand, focuses on conducting a localized search around the best solutions discovered, refining them further. It leverages existing knowledge to delve deeper into regions deemed promising. If the algorithm over-explores, it may waste time searching the entire solution space aimlessly, missing opportunities to find better solutions in specific areas. Conversely, excessive exploitation can cause the algorithm to prematurely converge to a local optimum, preventing the discovery of potentially better solutions elsewhere in the solution space [52]. Thus, balancing exploration and exploitation is crucial for achieving optimal performance in the algorithm. In this subsection, we examine the exploration and exploitation aspects of the IRTH algorithm. Equations (23) and (24) calculate the percentage of exploration and exploitation.
$$Exploration(\%) = \frac{Div(t)}{Div_{max}} \times 100\%,$$
$$Exploitation(\%) = \frac{\left| Div(t) - Div_{max} \right|}{Div_{max}} \times 100\%,$$
where $Div(t)$ denotes the measure of diversity at the $t$-th iteration, calculated by Equation (25), and $Div_{max}$ denotes the maximum measure of diversity over all iterations.
$$Div(t) = \frac{1}{D} \sum_{d=1}^{D} \frac{1}{N} \sum_{i=1}^{N} \left| median(x_d(t)) - x_{id}(t) \right|.$$
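The sketch below computes these quantities from a per-iteration record of Div(t); collecting the values over all iterations and normalizing by their maximum is how we read Equations (23)-(25), and the helper names are illustrative.

```python
import numpy as np

def div_measure(X):
    # Dimension-wise diversity Div(t), Equation (25)
    return np.abs(np.median(X, axis=0) - X).mean()

def exploration_exploitation(div_history):
    # Percentages of Equations (23) and (24) from a recorded Div(t) series
    div = np.asarray(div_history)
    d_max = div.max()
    xpl = 100 * div / d_max
    xpt = 100 * np.abs(div - d_max) / d_max
    return xpl, xpt
```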
The experimental results are presented in Figure 6. During the early stages of algorithm iteration, the proportion of exploration is significantly higher than that of exploitation. However, as the algorithm progresses, the proportion of exploitation gradually increases while the proportion of exploration decreases. By the end of the iteration, the proportion of exploitation approaches 100%, indicating that IRTH effectively balances exploration and exploitation, demonstrating strong performance in both aspects.

4.3.3. Impact Analysis of the Modification

To assess the effectiveness of the strategies introduced in this paper, we conduct ablation experiments in this subsection. When new strategies, operations, or parameters are introduced into an existing optimization algorithm, ablation experiments can verify whether these additions are really effective, enabling us to objectively evaluate the value of the new strategies. In this section, the algorithm with the addition of the stochastic reverse learning strategy based on Bernoulli mapping is named RTH1, the algorithm with the addition of the dynamic position update optimization strategy for stochastic mean fusion is named RTH2, and the algorithm that combines all three strategies with RTH is named IRTH. The experimental results are shown in Figure 7.
As can be seen from the figure, although IRTH does not achieve the fastest convergence speed on some functions, its convergence accuracy is the highest. In particular, on functions such as F6 and F9, IRTH improves both convergence accuracy and convergence speed, clearly outperforming the RTH algorithm. In addition, the experimental results show that all three proposed improvement strategies are effective against the RTH baseline, with RTH1 outperforming RTH, RTH2 outperforming RTH1, and IRTH obtaining the best results on most of the functions. This demonstrates that all three proposed strategies are effective and that combining them yields better results.

4.4. Comparison Using CEC 2017 Test Functions

In this subsection, we validate the effectiveness of the algorithm by conducting experiments using three dimensions from the CEC2017 test set. We compare the proposed IRTH algorithm with 11 other state-of-the-art algorithms. These comparison experiments clearly highlight the strengths and weaknesses of IRTH, with the experimental numerical results presented in Table 3, Table 4 and Table 5. To visualize the convergence speed of the algorithms during the optimization process, the convergence graphs of all 12 algorithms are displayed in Figure 8. To minimize the impact of randomness and further assess the stability of the algorithms, the boxplot diagrams for all the algorithms are shown in Figure 9.
As shown in Figure 8, in the 30-dimensional case, although the convergence accuracy of IRTH is similar to that of SAO for the F7 function, the IRTH algorithm improves convergence speed by about 200 iterations. Similarly, for the F13 and F24 functions, the convergence accuracy of IRTH is comparable to that of some other comparative algorithms, but its convergence speed is much faster. Furthermore, for functions such as F5, F8, and F16, the convergence accuracies of IRTH are significantly superior to those of other comparative algorithms. In the 50-dimensional case, on the F6, F9, F15, and F30 functions, the algorithm’s convergence speed is greatly improved, and it is able to escape local optima towards the end, finding better solutions on both the F7 and F30 functions. Especially when compared to the RTH algorithm, IRTH shows significant improvements in both convergence speed and accuracy. Most importantly, the IRTH algorithm performs exceptionally well in higher dimensions, and in the 100-dimensional case, IRTH significantly outperforms other comparative algorithms in both convergence accuracy and speed.
From Figure 9, it can be seen that IRTH has good stability whether it is in 30, 50, or 100 dimensions. The mean, median, maximum, and minimum values of the 30 runs outperform the other compared algorithms to a great extent, obtaining competitive results.

4.5. Statistical Analysis

Statistical analysis is essential for optimizing algorithms, allowing researchers to assess and compare the effectiveness of different algorithms, which helps in choosing the most appropriate one for a given research problem. In this section, we apply the Wilcoxon rank sum test and the Friedman mean rank test to evaluate the performance of the IRTH algorithm, as detailed below.

4.5.1. Wilcoxon Rank Sum Test

In this subsection, we employ the Wilcoxon rank sum test [53] to identify significant differences between the IRTH algorithm and the other algorithms, without assuming a normal distribution. Unlike the traditional t-test, the Wilcoxon rank sum test is more flexible because it does not require normally distributed data, making it particularly useful for datasets with outliers or non-normal distributions. The Wilcoxon rank sum test statistic $W$ is calculated by Equation (26).
$$W = \sum_{i=1}^{n_1} R(X_i),$$
where $R(X_i)$ denotes the rank of $X_i$ among all observations. The test statistic $U$ is calculated by Equation (27).
$$U = W - \frac{n_1 (n_1 + 1)}{2}.$$
For larger sample sizes, $U$ is approximately normally distributed, with mean and standard deviation given by Equations (28) and (29),
$$\mu_U = \frac{n_1 n_2}{2},$$
$$\sigma_U = \sqrt{\frac{n_1 n_2 (n_1 + n_2 + 1)}{12}},$$
and the standardized statistic Z is calculated by Equation (30).
$$Z = \frac{U - \mu_U}{\sigma_U}.$$
We set the significance level at 0.05 and assess whether the results from each IRTH run differ significantly from those of the other algorithms at this level. The null hypothesis ($H_0$) assumes no significant difference between the two algorithms. If $p < 0.05$, we reject the null hypothesis, indicating a significant difference; otherwise, we accept the null hypothesis, suggesting no significant difference. The experimental results are presented in Table 6, where IRTH shows significant advantages. "+" means IRTH is superior to the comparison algorithm under the Wilcoxon rank sum test, "=" means the two algorithms do not differ significantly, and "−" means IRTH is inferior to the comparison algorithm.
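In practice, the test can be run directly with SciPy's ranksums. The sketch below uses placeholder data standing in for 30 final fitness values per algorithm on one function, and derives the +/=/− mark from the p-value and the medians (our assumed tie-breaking convention).

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
irth = rng.normal(100.0, 2.0, 30)    # placeholder final fitness values, 30 runs
rival = rng.normal(103.0, 2.0, 30)   # placeholder results of one competitor

stat, p = ranksums(irth, rival)      # standardized Z statistic and p-value
if p >= 0.05:
    mark = '='                       # no significant difference
else:
    mark = '+' if np.median(irth) < np.median(rival) else '-'
print(f"Z = {stat:.3f}, p = {p:.4f}, mark = {mark}")
```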

4.5.2. Friedman Mean Rank Test

In this subsection, we apply the Friedman mean rank test [54] to assess the ranking of IRTH. This nonparametric method is commonly used to compare median differences across three or more related samples. The Friedman mean rank test is especially useful in repeated-measures designs or with paired samples and is often preferred over ANOVA when the data do not follow a normal distribution.
The Friedman nonparametric criterion statistic is defined by Equation (31).
$$Q = \frac{12}{nk(k+1)} \sum_{j=1}^{k} R_j^2 - 3n(k+1),$$
where $n$ is the number of blocks, $k$ is the number of groups, and $R_j$ is the rank sum of the $j$-th group. When $n$ and $k$ are large, $Q$ approximately follows a $\chi^2$ distribution with $k - 1$ degrees of freedom.
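SciPy provides this test directly. The sketch below uses placeholder scores standing in for the results of 12 algorithms on 29 functions and also computes the per-algorithm mean ranks reported as M.R.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

rng = np.random.default_rng(1)
scores = rng.random((29, 12))        # placeholder: functions (rows) x algorithms (columns)

stat, p = friedmanchisquare(*scores.T)              # Q of Equation (31)
mean_ranks = rankdata(scores, axis=1).mean(axis=0)  # M.R: lower rank is better
print(f"Q = {stat:.2f}, p = {p:.4g}")
print(mean_ranks)
```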
The experimental results are provided in Table 7, with the ranking distribution depicted in Figure 10. $M.R$ represents the average ranking of the algorithm across the test functions, and $T.R$ denotes the final overall ranking. From the experimental results, it is evident that the IRTH algorithm performs excellently, achieving the top overall ranking in the 30-, 50-, and 100-dimensional cases.

4.6. Sensitivity Analysis of Parameters

In this section, we present a sensitivity analysis of the parameter σ. Experiments were conducted on the 30-, 50-, and 100-dimensional CEC2017 test set, with experimental parameters consistent with those used previously. We calculated the average ranking of different values of σ for each dimension and illustrate the results as curves in Figure 11. From the experimental results, it is evident that the best overall performance occurs when σ = 0.4. Therefore, in this paper, we set σ to 0.4.

5. IRTH Algorithm for Real-Environment UAV Path Planning

In this section, in order to verify the effectiveness of the IRTH algorithm for planning UAV paths in real environments, we apply it to a real-environment UAV path-planning problem. First, we model the real environment, and then we use the algorithm to solve the problem. The specific details are as follows.

5.1. Scenarios and Objective Functions

In this subsection, we present the scenarios and objective functions for the UAV flight environment.

5.1.1. Scenario Setting

In this section, the scenarios we use to evaluate the performance of the algorithms are derived from real digital elevation model maps from LiDAR sensors. Two regions with different terrain structures on Christmas Island, Australia, were selected and then augmented to generate four baseline scenes, as shown in Figure 12. The red cylinders indicate the number and location of threats.

5.1.2. Optimization Problem Definition

In this subsection, we define the UAV path-planning problem as a cost function which includes path length, safety and feasibility constraints, etc., as detailed below.
  • Path Length Costs
For UAV path planning, the shortest path is a very important metric, but in most real-world problems there are often many obstacles on the straight line from the start point to the end point, so we need to perform obstacle-avoidance path planning for UAVs. In this subsection, we assume that the flight waypoints of the UAV are $P_{ij} = (x_{ij}, y_{ij}, z_{ij})$, and the Euclidean distance between two waypoints is $\left\| P_{ij} P_{i,j+1} \right\|$. The flight cost of the UAV is therefore given by Equation (32).
$$F_1(X_i) = \sum_{j=1}^{n-1} \left\| P_{i,j} P_{i,j+1} \right\|,$$
  2. Threat Costs
In the UAV path-planning problem, threat cost is one of the important factors affecting the decision. In various complex environments, UAVs usually face a variety of potential threats, and the introduction of threat cost can enable planning algorithms to avoid high-threat areas during path selection, reduce the risk of being attacked or damaged, and improve the survivability of UAV missions. Therefore, the threat cost of the UAV is calculated by Equation (33).
$$F_2(X_i) = \sum_{j=1}^{n-1} \sum_{k=1}^{K} T_k\left( P_{ij} P_{i,j+1} \right),$$
where $T_k(P_{ij} P_{i,j+1})$ denotes the flight constraint cost, which is calculated by Equation (34).
$$T_k(P_{ij} P_{i,j+1}) = \begin{cases} 0, & d_k > S + D + R_k \\ (S + D + R_k) - d_k, & D + R_k < d_k \le S + D + R_k \\ \infty, & d_k \le D + R_k \end{cases},$$
where $R_k$ denotes the radius of the $k$-th cylindrical obstacle, $D$ denotes the extent of the peripheral collision region, $S$ denotes the extent of the danger zone beyond the collision region, and $d_k$ denotes the distance from the center of the obstacle to the path segment $L_{P_{ij} P_{i,j+1}}$.
  3. Altitude Costs
In UAV path planning, altitude cost is an important factor affecting path selection. UAV flight altitude can directly affect the strength and coverage of communication signals. Lower altitudes may be interfered with by terrain or obstacles, affecting communication stability and mission control effectiveness. Introducing altitude cost can enable the planning algorithm to better select the flight altitude for communication stability and ensure the reliability of mission data transmission. The altitude cost of the UAV is calculated by Equation (35).
$$F_3(X_i) = \sum_{j=1}^{n} H_{ij},$$
where $H_{ij}$ denotes the altitude cost at location $X_i$, which is calculated by Equation (36).
$$H_{ij} = \begin{cases} \left| h_{ij} - \frac{h_{max} + h_{min}}{2} \right|, & h_{min} \le h_{ij} \le h_{max} \\ \infty, & \text{otherwise} \end{cases},$$
where $h_{ij}$ is the altitude at which the UAV is located, $h_{min}$ is the minimum allowed flight altitude, and $h_{max}$ is the maximum allowed flight altitude.
  4. Smoothness Costs
In UAV path planning, the smoothness cost is an important metric used to measure the smoothness or continuity of path turns. UAVs are subject to inertial and dynamical constraints in flight, with limited changes in turning radius or speed. The smoothness cost ensures the enforceability of path planning by avoiding sharp turns that do not meet the physical constraints of the UAV at the path planning stage. The smoothing cost for UAV flight is calculated by Equation (37).
$$F_4(X_i) = a_1 \sum_{j=1}^{n-2} \alpha_{ij} + a_2 \sum_{j=1}^{n-1} \left| \beta_{ij} - \beta_{i,j-1} \right|,$$
where $a_1$ denotes the penalty coefficient for the UAV horizontal turn angle constraint, and $a_2$ denotes the penalty coefficient for the UAV vertical pitch angle constraint. $\alpha_{ij}$ denotes the horizontal turn angle, computed by Equation (38), and $\beta_{ij}$ denotes the vertical pitch angle, computed by Equation (39).
$$\alpha_{ij} = \arctan\left( \frac{\left\| L'_{P_{ij} P_{i,j+1}} \times L'_{P_{ij} P_{i,j+2}} \right\|}{L'_{P_{ij} P_{i,j+1}} \cdot L'_{P_{ij} P_{i,j+2}}} \right),$$
$$\beta_{ij} = \arctan\left( \frac{z_{i,j+1} - z_{ij}}{\left\| L'_{P_{ij} P_{i,j+1}} \right\|} \right),$$
where $L'_{P_{ij} P_{i,j+1}}$ is the projection of the path segment onto the horizontal plane, calculated by Equation (40).
$$L'_{P_{ij} P_{i,j+1}} = k \times \left( L_{P_{ij} P_{i,j+1}} \times k \right),$$
where $k$ is the unit vector in the positive direction of the $z$-axis.
  5. Overall Objective Function
Considering the path length cost, threat cost, altitude cost, and smoothness cost, the overall multi-cost objective function is calculated by Equation (41).
$$F(X_i) = \sum_{k=1}^{4} b_k F_k(X_i),$$
where $b_k$ is the weight coefficient.
  6. Problem Formulation
Based on the four aforementioned costs and the overall objective function, the goal of this system is to minimize the flight cost of the UAV. Therefore, the optimization problem is formulated by Equation (42).
$$P: \quad \min F(X_i) = \sum_{k=1}^{4} b_k F_k(X_i), \qquad \text{s.t.} \quad \sum_{k=1}^{4} b_k = 1.$$
In the UAV path-planning problem, the path length cost is the single most important factor, as it best characterizes the quality of the planned path. Next is the threat cost: only by avoiding threats well can the UAV perform its tasks reliably. We therefore set the weight coefficients to 0.4, 0.3, 0.2, and 0.1.
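To show how the pieces fit together, the sketch below assembles Equations (32)-(41) into a single fitness function suitable for the optimizer. It is a simplified illustration: the segment-to-obstacle distance is approximated by the distance from the segment midpoint, and S, D, a1, a2, and the large finite penalty standing in for the infinite cost are all assumed values.

```python
import numpy as np

def uav_path_cost(P, threats, h_min, h_max, S=20.0, D=10.0,
                  a1=1.0, a2=1.0, b=(0.4, 0.3, 0.2, 0.1), big=1e4):
    """Weighted path cost of Equation (41). P is an (n, 3) waypoint array;
    threats is a list of (center_xy, R_k) cylinders."""
    seg = np.diff(P, axis=0)
    F1 = np.linalg.norm(seg, axis=1).sum()                    # Eq. (32)
    F2 = 0.0                                                  # Eqs. (33)-(34)
    for c, Rk in threats:
        for j in range(len(P) - 1):
            mid = (P[j, :2] + P[j + 1, :2]) / 2               # assumed: midpoint
            dk = np.linalg.norm(mid - np.asarray(c))          # stands in for
            if dk <= D + Rk:                                  # segment distance
                F2 += big
            elif dk <= S + D + Rk:
                F2 += (S + D + Rk) - dk
    h = P[:, 2]
    F3 = np.where((h >= h_min) & (h <= h_max),                # Eqs. (35)-(36)
                  np.abs(h - (h_max + h_min) / 2), big).sum()
    turn = 0.0
    for j in range(len(seg) - 1):                             # Eq. (38)
        u, v = seg[j, :2], seg[j + 1, :2]
        turn += np.arctan2(abs(u[0] * v[1] - u[1] * v[0]), u @ v)
    pitch = np.arctan2(seg[:, 2], np.linalg.norm(seg[:, :2], axis=1))  # Eq. (39)
    F4 = a1 * turn + a2 * np.abs(np.diff(pitch)).sum()        # Eq. (37)
    return b[0] * F1 + b[1] * F2 + b[2] * F3 + b[3] * F4      # Eq. (41)
```

A flattened waypoint vector produced by the optimizer can be reshaped to (n, 3) and passed to this function as the fitness to minimize.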

5.1.3. Analysis of Experimental Results

In order to verify the performance of IRTH, we conducted experiments on it in four different scenarios. The specific details are as follows.
Scenario 1: In this subsection, we perform an experimental analysis to verify the performance of the algorithm in the case of Scenario 1. We set the starting point of the UAV to [100,100,150], the end point to [800,800,150], and the waypoints to 10 for experimental analysis. Its path cost is shown in Table 8, where mean, median, max, and min denote the mean, median, maximum, and minimum values obtained from 30 independent runs of the algorithm, respectively. The path-planning schematic for each algorithm in Scenario 1 is shown in Figure 13.
From the experimental results, it can be seen that the IRTH algorithm achieves good performance in Scenario 1. The total cost of IRTH is 414.26, the lowest mean over 30 runs among the 12 algorithms. In terms of stability, the difference between the minimum and maximum values for the IRTH algorithm is 5.59, also the smallest, so IRTH plans paths reliably for the UAV in Scenario 1.
Scenario 2: In this subsection, we perform an experimental analysis to verify the performance of the algorithm in the case of Scenario 2. We set the starting point of the UAV to [100,100,150], the end point to [800,800,150], and the waypoints to 10 for experimental analysis. Its path cost is shown in Table 9. The path-planning schematic for each algorithm in Scenario 2 is shown in Figure 14.
As can be seen from the experimental results, IRTH still obtains competitive results in Scenario 2. Although the SBOA, SAO, ED, and HHWOA algorithms are more stable than IRTH, IRTH obtains the best mean value, with a cost roughly 30 lower than that of IGWO, demonstrating its superior performance.
Scenario 3: In this subsection, we perform an experimental analysis to verify the performance of the algorithm in the case of Scenario 3. We set the starting point of the UAV to [100,100,150], the end point to [800,800,150], and the waypoints to 10 for experimental analysis. Its path cost is shown in Table 10. The path-planning schematic for each algorithm in Scenario 3 is shown in Figure 15.
From the experimental results, it can be seen that IRTH still obtains competitive results in Scenario 3. Except for the IRTH algorithm, the cost of the paths planned by all other algorithms is greater than 430, while the IRTH algorithm plans a path with a cost of 426.47. Most importantly, the RBMO and IGWO algorithms fail to plan feasible paths for the UAV in some cases, whereas IRTH succeeds, demonstrating its wide applicability.
Scenario 4: In this subsection, we perform an experimental analysis to verify the performance of the algorithm in the case of Scenario 4. We set the starting point of the UAV to [100,100,150], the end point to [800,800,150], and the waypoints to 10 for experimental analysis. Its path cost is shown in Table 11. The path-planning schematic for each algorithm in Scenario 4 is shown in Figure 16.
From the experimental results, it can be seen that IRTH still obtains competitive results in Scenario 4. In Scenario 1 and Scenario 2, all algorithms are able to plan paths for the UAV, but as the problem becomes more complex and the number of obstacles increases, two algorithms in Scenario 3 and four algorithms in Scenario 4 fail to produce feasible paths. IRTH, however, is still able to plan a path for the UAV and obtains the best results, planning the lowest-cost path among the 12 compared algorithms.

6. Conclusions

In this paper, an improved RTH algorithm based on a trust domain is proposed for the UAV path-planning problem in real environments. First, we adopt three strategies: a stochastic reverse learning strategy based on Bernoulli mapping, a dynamic position update optimization strategy based on stochastic mean fusion, and a trust domain-based optimization method for frontier position updates. These strategies improve the performance of the algorithm in terms of population diversity, convergence speed, and convergence accuracy, enabling it to solve problems more effectively. In addition, to verify the effectiveness of the algorithm, its performance is comprehensively evaluated on the CEC2017 test set. Finally, the algorithm is applied to the UAV path-planning problem in a real environment to perform path planning for UAVs.
In the future, building on the excellent performance of the IRTH algorithm, we plan to apply it to practical engineering problems in other fields, such as cloud resource scheduling and neural network feature selection.

Author Contributions

Conceptualization, M.W. and P.Y.; methodology, M.W. and P.Y.; software, M.W. and P.Y.; validation, M.W. and P.Y.; formal analysis, M.W. and P.H.; investigation, M.W. and P.H.; resources, M.W. and Z.Y.; data curation, M.W. and Z.Y.; writing—original draft preparation, M.W. and S.K.; writing—review and editing, M.W. and S.K.; visualization, M.W. and L.H.; supervision, M.W. and L.H.; funding acquisition, M.W. and P.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Natural Science Research Project of the Guizhou Provincial Department of Education (QianJiangJi [2024]297), the Guizhou Science and Technology Support Program Project (QianKeHe ZhiCheng [2024] General No. 070), and the School-level Project of Guizhou Equipment Manufacturing Vocational College (ZBKY2024-14).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yaacoub, J.; Noura, H.; Salman, O.; Chehab, A. Security Analysis of Drones Systems: Attacks, Limitations, and Recommendations. Internet Things 2020, 11, 100218.
  2. Ding, W.; Yang, H.; Yu, K.; Shu, J. Crack Detection and Quantification for Concrete Structures Using UAV and Transformer. Autom. Constr. 2023, 152, 104929.
  3. Radoglou-Grammatikis, P.; Sarigiannidis, P.; Lagkas, T.; Moscholios, I. A Compilation of UAV Applications for Precision Agriculture. Comput. Netw. 2020, 172, 107148.
  4. Tang, P.; Li, J.; Sun, H. A Review of Electric UAV Visual Detection and Navigation Technologies for Emergency Rescue Missions. Sustainability 2024, 16, 2105.
  5. Vacca, G.; Vecchi, E. UAV Photogrammetric Surveys for Tree Height Estimation. Drones 2024, 8, 106.
  6. Wu, X.; Li, W.; Hong, D.; Tao, R.; Du, Q. Deep Learning for Unmanned Aerial Vehicle-Based Object Detection and Tracking: A Survey. IEEE Geosci. Remote Sens. Mag. 2022, 10, 91–124.
  7. Liu, X.; Li, G.; Yang, H.; Zhang, N.; Wang, L.; Shao, P. Agricultural UAV Trajectory Planning by Incorporating Multi-Mechanism Improved Grey Wolf Optimization Algorithm. Expert Syst. Appl. 2023, 233, 120946.
  8. Xu, Q.; Su, Z.; Fang, D.; Wu, Y. BASIC: Distributed Task Assignment With Auction Incentive in UAV-Enabled Crowdsensing System. IEEE Trans. Veh. Technol. 2024, 73, 2416–2430.
  9. Bai, Z.; Zhou, H.; Shi, J.; Xing, L.; Wang, J. A Hybrid Multi-Objective Evolutionary Algorithm with High Solving Efficiency for UAV Defense Programming. Swarm Evol. Comput. 2024, 87, 101572.
  10. Murota, K.; Shioura, A. Dijkstra's Algorithm and L-Concave Function Maximization. Math. Program. 2014, 145, 163–177.
  11. Saian, P.O.N.; Suyoto; Pranowo. Optimized A-Star Algorithm in Hexagon-Based Environment Using Parallel Bidirectional Search. In Proceedings of the 2016 8th International Conference on Information Technology and Electrical Engineering (ICITEE), Yogyakarta, Indonesia, 5–6 October 2016.
  12. Pesic, D.; Selmic, M.; Macura, D.; Rosic, M. Finding Optimal Route by Two-Criterion Fuzzy Floyd's Algorithm-Case Study Serbia. Oper. Res. 2020, 20, 119–138.
  13. de las Casas, P.; Kraus, L.; Sedeño-Noda, A.; Borndörfer, R. Targeted Multiobjective Dijkstra Algorithm. Networks 2023, 82, 277–298.
  14. Laskaris, R. Artificial Intelligence: A Modern Approach, 3rd Edition. Libr. J. 2015, 140, 45.
  15. Ji, J.; Wu, T.; Yang, C. Neural Population Dynamics Optimization Algorithm: A Novel Brain-Inspired Meta-Heuristic Method. Knowl.-Based Syst. 2024, 300, 112194.
  16. Truong, D.-N.; Chou, J.-S. Metaheuristic Algorithm Inspired by Enterprise Development for Global Optimization and Structural Engineering Problems with Frequency Constraints. Eng. Struct. 2024, 318, 118679.
  17. Yuan, C.; Zhao, D.; Heidari, A.A.; Liu, L.; Chen, Y.; Chen, H. Polar Lights Optimizer: Algorithm and Applications in Image Segmentation and Feature Selection. Neurocomputing 2024, 607, 128427.
  18. Alhijawi, B.; Awajan, A. Genetic Algorithms: Theory, Genetic Operators, Solutions, and Applications. Evol. Intell. 2024, 17, 1245–1256.
  19. Gad, A.G. Particle Swarm Optimization Algorithm and Its Applications: A Systematic Review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561.
  20. López-Ibáñez, M.; Stützle, T.; Dorigo, M. Ant Colony Optimization: A Component-Wise Overview. In Handbook of Heuristics; Martí, R., Pardalos, P.M., Resende, M.G.C., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 371–407. ISBN 978-3-319-07124-4.
  21. Hashim, F.; Hussain, K.; Houssein, E.; Mabrouk, M.; Al-Atabany, W. Archimedes Optimization Algorithm: A New Metaheuristic Algorithm for Solving Optimization Problems. Appl. Intell. 2021, 51, 1531–1551.
  22. Gao, H.; Zhang, Q. Alpha Evolution: An Efficient Evolutionary Algorithm with Evolution Path Adaptation and Matrix Generation. Eng. Appl. Artif. Intell. 2024, 137, 109202.
  23. Luan, T.M.; Khatir, S.; Tran, M.T.; De Baets, B.; Cuong-Le, T. Exponential-Trigonometric Optimization Algorithm for Solving Complicated Engineering Problems. Comput. Methods Appl. Mech. Eng. 2024, 432, 117411.
  24. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger Games Search: Visions, Conception, Implementation, Deep Analysis, Perspectives, and towards Performance Shifts. Expert Syst. Appl. 2021, 177, 114864.
  25. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An Efficient Optimization Algorithm Based on Weighted Mean of Vectors. Expert Syst. Appl. 2022, 195, 116516.
  26. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H. Parrot Optimizer: Algorithm and Applications to Medical Problems. Comput. Biol. Med. 2024, 172, 108064.
  27. Wu, X.; Li, S.; Jiang, X.; Zhou, Y. Information Acquisition Optimizer: A New Efficient Algorithm for Solving Numerical and Constrained Engineering Optimization Problems. J. Supercomput. 2024, 80, 25736–25791.
  28. Cheng, J.; De Waele, W. Weighted Average Algorithm: A Novel Meta-Heuristic Optimization Algorithm Based on the Weighted Average Position Concept. Knowl.-Based Syst. 2024, 305, 112564.
  29. Fakhouri, H.N.; Awaysheh, F.M.; Alawadi, S.; Alkhalaileh, M.; Hamad, F. Four Vector Intelligent Metaheuristic for Data Optimization. Computing 2024, 106, 2321–2359.
  30. Ghasemi, M.; Golalipour, K.; Zare, M.; Mirjalili, S.; Trojovský, P.; Abualigah, L.; Hemmati, R. Flood Algorithm (FLA): An Efficient Inspired Meta-Heuristic for Engineering Optimization. J. Supercomput. 2024, 80, 22913–23017.
  31. Falahah, I.A.; Al-Baik, O.; Alomari, S.; Bektemyssova, G.; Gochhait, S.; Leonova, I.; Malik, O.P.; Werner, F.; Dehghani, M. Frilled Lizard Optimization: A Novel Bio-Inspired Optimizer for Solving Engineering Applications. Comput. Mater. Contin. 2024, 79, 3631–3678.
  32. Wang, W.; Tian, W.; Xu, D.; Zang, H. Arctic Puffin Optimization: A Bio-Inspired Metaheuristic Algorithm for Solving Engineering Design Optimization. Adv. Eng. Softw. 2024, 195, 103694.
  32. Wang, W.; Tian, W.; Xu, D.; Zang, H. Arctic Puffin Optimization: A Bio-Inspired Metaheuristic Algorithm for Solving Engineering Design Optimization. Adv. Eng. Softw. 2024, 195, 103694. [Google Scholar] [CrossRef]
  33. Mohammadzadeh, A.; Mirjalili, S. Eel and Grouper Optimizer: A Nature-Inspired Optimization Algorithm. Clust. Comput. 2024, 27, 12745–12786. [Google Scholar] [CrossRef]
  34. Bouaouda, A.; Hashim, F.A.; Sayouti, Y.; Hussien, A.G. Pied Kingfisher Optimizer: A New Bio-Inspired Algorithm for Solving Numerical Optimization and Industrial Engineering Problems. Neural Comput. Appl. 2024, 36, 15455–15513. [Google Scholar] [CrossRef]
  35. Ferahtia, S.; Houari, A.; Rezk, H.; Djerioui, A.; Machmoum, M.; Motahhir, S.; Ait-Ahmed, M. Red-Tailed Hawk Algorithm for Numerical Optimization and Real-World Problems. Sci. Rep. 2023, 13, 12950. [Google Scholar] [CrossRef]
  36. Almousa, M.T.; Gomaa, M.R.; Ghasemi, M.; Louzazni, M. Single-Sensor Global MPPT for PV System Interconnected with DC Link Using Recent Red-Tailed Hawk Algorithm. Energies 2024, 17, 3391. [Google Scholar] [CrossRef]
  37. Qin, X.; Li, S.; Tong, J.; Xie, C.; Zhang, X.; Wu, F.; Xie, Q.; Ling, Y.; Lin, G. ERTH Scheduler: Enhanced Red-Tailed Hawk Algorithm for Multi-Cost Optimization in Cloud Task Scheduling. Artif. Intell. Rev. 2024, 57, 328. [Google Scholar] [CrossRef]
  38. Awad, N.H.; Ali, M.Z.; Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016. [Google Scholar]
  39. Yuan, Y.; Gao, W.; Huang, L.; Li, H.; Xie, J. A Two-Phase Constraint-Handling Technique for Constrained Optimization. IEEE Trans. Syst. Man Cybern.-Syst. 2023, 53, 6194–6203. [Google Scholar] [CrossRef]
  40. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Nutcracker Optimizer: A Novel Nature-Inspired Metaheuristic Algorithm for Global Optimization and Engineering Design Problems. Knowl.-Based Syst. 2023, 262, 110248. [Google Scholar] [CrossRef]
  41. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovsky, P. Coati Optimization Algorithm: A New Bio-Inspired Metaheuristic Algorithm for Solving Optimization Problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  42. Braik, M. Chameleon Swarm Algorithm: A Bio-Inspired Optimizer for Solving Engineering Design Problems. Expert Syst. Appl. 2021, 174, 114685. [Google Scholar] [CrossRef]
  43. Abdollahzadeh, B.; Gharehchopogh, F.; Mirjalili, S. Artificial Gorilla Troops Optimizer: A New Nature-Inspired Metaheuristic Algorithm for Global Optimization Problems. Int. J. Intell. Syst. 2021, 36, 5887–5958. [Google Scholar] [CrossRef]
  44. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary Bird Optimization Algorithm: A New Metaheuristic for Solving Global Optimization Problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  45. Deng, L.; Liu, S. Snow Ablation Optimizer: A Novel Metaheuristic Technique for Numerical Optimization and Engineering Design. Expert Syst. Appl. 2023, 225, 120069. [Google Scholar] [CrossRef]
  46. Abdel-Salam, M.; Hu, G.; Celik, E.; Gharehchopogh, F.S.; El-Hasnony, I.M. Chaotic RIME Optimization Algorithm with Adaptive Mutualism for Feature Selection Problems. Comput. Biol. Med. 2024, 179, 108803. [Google Scholar] [CrossRef]
  47. Zolfi, K. Gold Rush Optimizer: A New Population-Based Metaheuristic Algorithm. Oper. Res. Decis. 2023, 33. [Google Scholar] [CrossRef]
  48. Fu, S.; Li, K.; Huang, H.; Ma, C.; Fan, Q.; Zhu, Y. Red-Billed Blue Magpie Optimizer: A Novel Metaheuristic Algorithm for 2D/3D UAV Path Planning and Engineering Design Problems. Artif. Intell. Rev. 2024, 57, 134. [Google Scholar] [CrossRef]
  49. Su, Y.; Dai, Y.; Liu, Y. A Hybrid Hyper-Heuristic Whale Optimization Algorithm for Reusable Launch Vehicle Reentry Trajectory Optimization. Aerosp. Sci. Technol. 2021, 119, 107200. [Google Scholar] [CrossRef]
  50. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An Improved Grey Wolf Optimizer for Solving Engineering Problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  51. Ito, H.; Ogden, R.; Langenhorst, T.; Inoue-Murayama, M. Contrasting Results From Molecular and Pedigree-Based Population Diversity Measures in Captive Zebra Highlight Challenges Facing Genetic Management of Zoo Populations. Zoo Biol. 2017, 36, 87–94. [Google Scholar] [CrossRef]
  52. Luo, C.; Kumar, S.; Mallick, D.; Luo, B. Impacts of Exploration and Exploitation on Firms Performance and the Moderating Effects of Slack: A Panel Data Analysis. IEEE Trans. Eng. Manag. 2019, 66, 613–620. [Google Scholar] [CrossRef]
  53. Rosner, B.; Glynn, R. Power and Sample Size Estimation for the Wilcoxon Rank Sum Test with Application to Comparisons of C Statistics from Alternative Prediction Models. Biometrics 2009, 65, 188–197. [Google Scholar] [CrossRef]
  54. Rayner, J.; Livingston, G. Relating the Friedman Test Adjusted for Ties, the Cochran-Mantel-Haenszel Mean Score Test and the ANOVA F Test. Commun. Stat.-Theory Methods 2023, 52, 4369–4378. [Google Scholar] [CrossRef]
Figure 1. Graphical description of the stochastic reverse learning strategy.
Figure 2. Dynamic position update optimization strategy diagram.
Figure 3. Optimization method for frontier position update based on trust domain diagram.
Figure 4. Flowchart of the IRTH algorithm.
Figure 5. Analysis of the population diversity of IRTH and RTH.
Figure 6. Analysis of the exploration and exploitation of IRTH.
Figure 7. Comparison of different improvement strategies.
Figure 8. Comparison of the convergence speed of different algorithms on the CEC2017 test set.
Figure 9. Boxplot analysis for different algorithms on the CEC2017 test set.
Figure 10. Ranking Sankey diagram of different algorithms on CEC2017.
Figure 11. Parametric sensitivity analysis: average ranking graph.
Figure 12. Four different scenario views.
Figure 13. Scenario 1: schematic diagram of path planning.
Figure 14. Scenario 2: schematic diagram of path planning.
Figure 15. Scenario 3: schematic diagram of path planning.
Figure 16. Scenario 4: schematic diagram of path planning.
Table 1. Summary of the latest optimization algorithms.

| Algorithm | Inspiration | Classification | Reference |
|---|---|---|---|
| Alpha Evolution (AE) | An alpha operator combining an adaptive base vector with random and adaptive step sizes. | Evolutionary | [22] |
| Exponential-Trigonometric Optimization (ETO) | An intricate blend of exponential and trigonometric functions. | Mathematical-based | [23] |
| Hunger Games Search (HGS) | The hunger-induced actions and behavioral decisions of animals. | Swarm-based | [24] |
| weIghted meaN oF vectOrs (INFO) | The weighted mean idea. | Mathematical-based | [25] |
| Parrot Optimizer (PO) | Notable behaviors exhibited by trained Pyrrhura molinae parrots. | Swarm-based | [26] |
| Polar Lights Optimization (PLO) | The northern lights (aurora). | Physics-based | [17] |
| Information Acquisition Optimizer (IAO) | Three key strategies of human information acquisition behavior. | Swarm-based | [27] |
| Weighted Average Algorithm (WAA) | The weighted average position of the whole population. | Mathematical-based | [28] |
| Neural Population Dynamics Optimization Algorithm (NPDOA) | Brain neuroscience. | Swarm-based | [15] |
| Four-Vector Intelligent Metaheuristic (FVIM) | Four top-performing leaders within a swarm. | Swarm-based | [29] |
| Flood Algorithm (FLA) | The complex dynamics and flow patterns of water masses during river basin floods. | Swarm-based | [30] |
| Frilled Lizard Optimization (FLO) | The distinctive hunting strategies of frilled lizards in their native environment. | Swarm-based | [31] |
| Arctic Puffin Optimization (APO) | The flight patterns and underwater foraging habits of Arctic puffins. | Swarm-based | [32] |
| Eel and Grouper Optimizer (EGO) | The cooperative interaction and foraging tactics of eels and groupers in marine ecosystems. | Swarm-based | [33] |
| Pied Kingfisher Optimizer (PKO) | The unique hunting strategies and symbiotic relationships of pied kingfishers in their natural environment. | Swarm-based | [34] |
Table 2. Parameter settings of the comparison algorithms.

| Algorithm | Parameter Name | Parameter Value | Reference |
|---|---|---|---|
| CSA | v, rho, gamma, alpha | 0.1, 1.0, 2.0, 4.0 | [42] |
| GTO | P, Beta, w | 0.03, 3, 0.8 | [43] |
| SBOA | beta | 1.5 | [44] |
| SAO | k | 1 | [45] |
| RIME | W | 5 | [46] |
| GRO | sigma_initial | 2 | [47] |
| RBMO | Epsilon | 0.5 | [48] |
| ED | g | 1 | [16] |
| HHWOA | w | 3 | [49] |
| RTH | A, R0, r | 15, 0.5, 1.5 | [50] |
Table 3. Results of various algorithms tested on the CEC 2017 benchmark (dim = 30).

| ID | Metric | CSA | GTO | SBOA | SAO | RIME | GRO | RBMO | ED | HHWOA | IGWO | RTH | IRTH |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | mean | 2.5268E+04 | 5.0944E+03 | 5.6114E+03 | 4.0570E+03 | 3.9372E+05 | 2.3385E+06 | 7.0808E+02 | 2.9724E+03 | 2.5445E+03 | 4.3980E+05 | 5.3680E+03 | 1.7175E+03 |
| F1 | std | 3.1327E+04 | 5.8841E+03 | 5.9302E+03 | 4.0759E+03 | 2.2904E+05 | 5.2160E+06 | 7.8384E+02 | 2.4910E+03 | 3.5655E+03 | 2.7221E+05 | 5.0480E+03 | 1.7021E+03 |
| F3 | mean | 3.2621E+04 | 1.0753E+03 | 6.7930E+03 | 6.6223E+04 | 5.6882E+03 | 3.4128E+04 | 3.0090E+02 | 8.0829E+04 | 3.0000E+02 | 5.1794E+03 | 3.0000E+04 | 8.3308E+02 |
| F3 | std | 8.4141E+03 | 9.5157E+02 | 4.1221E+03 | 1.7565E+04 | 3.3352E+03 | 8.4822E+03 | 7.8950E-01 | 1.5840E+04 | 1.4593E-02 | 3.3919E+03 | 2.0762E-09 | 2.6554E+04 |
| F4 | mean | 5.2383E+02 | 4.8913E+02 | 4.9312E+02 | 4.9602E+02 | 5.0150E+02 | 5.1239E+02 | 4.8577E+02 | 4.9204E+02 | 4.7482E+02 | 4.9921E+02 | 4.2103E+02 | 4.7185E+02 |
| F4 | std | 3.4100E+01 | 2.6032E+01 | 2.4609E+01 | 1.6715E+01 | 2.0474E+01 | 1.7495E+01 | 1.4386E+01 | 2.2995E+01 | 3.2282E+01 | 1.3031E+01 | 2.9404E+01 | 2.1995E+01 |
| F5 | mean | 5.9570E+02 | 6.8035E+02 | 5.5860E+02 | 5.5188E+02 | 5.7958E+02 | 5.7873E+02 | 5.5124E+02 | 6.4815E+02 | 5.8568E+02 | 5.6833E+02 | 6.6768E+02 | 5.3147E+02 |
| F5 | std | 1.6985E+01 | 3.8921E+01 | 1.2993E+01 | 1.3567E+01 | 1.9803E+01 | 1.8263E+01 | 1.4611E+01 | 1.9400E+01 | 2.0295E+01 | 4.8565E+01 | 3.4121E+01 | 6.1281E+00 |
| F6 | mean | 6.1964E+02 | 6.4306E+02 | 6.0034E+02 | 6.0002E+02 | 6.0449E+02 | 6.0393E+02 | 6.0009E+02 | 6.0343E+02 | 6.0522E+02 | 6.0047E+02 | 6.4057E+02 | 6.0000E+02 |
| F6 | std | 4.4496E+00 | 8.2688E+00 | 5.2448E-01 | 5.5258E-02 | 2.5034E+00 | 1.4079E+00 | 2.3822E-01 | 3.5219E+00 | 3.6660E+00 | 2.8919E-01 | 7.4497E+00 | 1.8704E-03 |
| F7 | mean | 8.9373E+02 | 1.0750E+03 | 8.0717E+02 | 8.3401E+02 | 8.2481E+02 | 8.1598E+02 | 7.8010E+02 | 8.7468E+02 | 8.6456E+02 | 8.2044E+02 | 1.0637E+03 | 7.5878E+02 |
| F7 | std | 3.3413E+01 | 7.7119E+01 | 3.2540E+01 | 6.8141E+01 | 2.1257E+01 | 3.3944E+01 | 1.2747E+01 | 1.7801E+01 | 3.8633E+01 | 5.6422E+01 | 6.8721E+01 | 4.7457E+00 |
| F8 | mean | 8.7336E+02 | 9.4986E+02 | 8.5854E+02 | 8.6081E+02 | 8.8801E+02 | 8.6351E+02 | 8.4982E+02 | 9.4873E+02 | 8.8213E+02 | 8.8031E+02 | 9.3960E+02 | 8.3472E+02 |
| F8 | std | 1.1732E+01 | 2.7647E+01 | 1.7663E+01 | 1.5274E+01 | 2.1151E+01 | 1.4949E+01 | 1.4148E+01 | 1.4678E+01 | 2.0213E+01 | 5.5719E+01 | 2.6524E+01 | 8.2683E+00 |
| F9 | mean | 1.7543E+03 | 3.6873E+03 | 9.6105E+02 | 9.1016E+02 | 1.7802E+03 | 1.2072E+03 | 9.0813E+02 | 1.2338E+03 | 1.2185E+03 | 9.1110E+02 | 4.2867E+03 | 9.0145E+02 |
| F9 | std | 3.5944E+02 | 8.5773E+02 | 9.0969E+01 | 2.8312E+01 | 1.1670E+03 | 3.1820E+02 | 7.0088E+00 | 2.4491E+02 | 2.4972E+02 | 2.2092E+01 | 6.8867E+02 | 8.2351E-01 |
| F10 | mean | 4.9250E+03 | 5.6615E+03 | 4.0576E+03 | 3.7100E+03 | 4.4685E+03 | 4.1210E+03 | 4.6323E+03 | 5.0919E+03 | 4.7873E+03 | 6.4184E+03 | 5.1349E+03 | 4.4389E+03 |
| F10 | std | 6.9564E+02 | 1.0034E+03 | 6.5293E+02 | 5.8129E+02 | 5.4907E+02 | 5.9292E+02 | 6.2115E+02 | 3.1000E+02 | 7.0310E+02 | 2.2205E+03 | 6.6262E+02 | 5.1252E+02 |
| F11 | mean | 1.2986E+03 | 1.2297E+03 | 1.1767E+03 | 1.1661E+03 | 1.2948E+03 | 1.2046E+03 | 1.1664E+03 | 1.1865E+03 | 1.2019E+03 | 1.1795E+03 | 1.2611E+03 | 1.1477E+03 |
| F11 | std | 5.3558E+01 | 4.7065E+01 | 3.7780E+01 | 4.1435E+01 | 6.0997E+01 | 3.6968E+01 | 3.1578E+01 | 3.9749E+01 | 4.9937E+01 | 2.7819E+01 | 6.0999E+01 | 2.3393E+01 |
| F12 | mean | 1.6852E+07 | 1.5626E+05 | 5.9525E+05 | 3.6164E+05 | 1.0177E+07 | 1.0509E+06 | 1.8974E+04 | 3.3351E+05 | 4.3021E+04 | 1.8439E+06 | 2.3416E+04 | 4.5370E+04 |
| F12 | std | 1.8270E+07 | 1.2979E+05 | 5.1766E+05 | 3.2484E+05 | 5.5728E+06 | 7.9535E+05 | 1.6458E+04 | 2.9926E+05 | 2.3693E+04 | 1.5455E+06 | 1.4730E+04 | 1.7524E+04 |
| F13 | mean | 9.6833E+04 | 2.0404E+04 | 1.9129E+04 | 2.1458E+04 | 5.1582E+04 | 2.1763E+04 | 6.3328E+03 | 3.5871E+04 | 1.7913E+04 | 1.1293E+05 | 1.8770E+04 | 5.8237E+03 |
| F13 | std | 4.7553E+04 | 2.3427E+04 | 1.9776E+04 | 1.9010E+04 | 4.3704E+04 | 1.5582E+04 | 1.3187E+04 | 2.1632E+04 | 1.8586E+04 | 6.5731E+04 | 2.1165E+04 | 5.1114E+03 |
| F14 | mean | 2.4833E+03 | 2.7080E+03 | 9.5137E+03 | 2.8358E+04 | 3.4860E+04 | 2.0049E+04 | 1.4566E+03 | 3.2602E+04 | 1.4693E+03 | 8.3857E+03 | 1.7526E+03 | 5.0657E+03 |
| F14 | std | 1.6214E+03 | 1.9598E+03 | 9.1744E+03 | 2.5111E+04 | 3.0256E+04 | 2.0971E+04 | 1.0199E+01 | 2.1784E+04 | 3.0598E+01 | 6.5497E+03 | 1.5094E+02 | 2.6795E+03 |
| F15 | mean | 1.8524E+04 | 6.5373E+03 | 1.1582E+04 | 5.4142E+03 | 1.5255E+04 | 5.7455E+03 | 1.6705E+03 | 3.4160E+03 | 1.5549E+03 | 1.9376E+04 | 8.2909E+03 | 6.0515E+03 |
| F15 | std | 8.4935E+03 | 8.7413E+03 | 1.1323E+04 | 4.7554E+03 | 1.1794E+04 | 4.6040E+03 | 7.0039E+01 | 3.2952E+03 | 6.8061E+01 | 1.2344E+04 | 9.7063E+03 | 4.3110E+03 |
| F16 | mean | 2.4702E+03 | 2.6175E+03 | 2.1408E+03 | 2.4225E+03 | 2.5512E+03 | 2.1658E+03 | 2.2663E+03 | 2.9281E+03 | 2.4505E+03 | 2.0412E+03 | 2.8202E+03 | 2.1409E+03 |
| F16 | std | 2.5701E+02 | 3.1808E+02 | 3.5859E+02 | 3.1980E+02 | 2.8263E+02 | 2.5067E+02 | 2.8021E+02 | 1.3966E+02 | 2.8544E+02 | 3.1832E+02 | 2.7885E+02 | 1.7034E+02 |
| F17 | mean | 1.9548E+03 | 2.2642E+03 | 1.9024E+03 | 2.0244E+03 | 2.0566E+03 | 1.8389E+03 | 1.9563E+03 | 2.1257E+03 | 2.1134E+03 | 1.8447E+03 | 2.4736E+03 | 1.8893E+03 |
| F17 | std | 1.0690E+02 | 2.1529E+02 | 7.1183E+01 | 1.9155E+02 | 1.6076E+02 | 6.1016E+01 | 1.1669E+02 | 1.1645E+02 | 1.9547E+02 | 9.9647E+01 | 3.0046E+02 | 7.2373E+01 |
| F18 | mean | 9.8052E+04 | 5.6765E+04 | 3.1710E+05 | 2.6613E+05 | 6.0203E+05 | 2.2997E+05 | 1.9150E+03 | 6.6305E+05 | 8.1518E+03 | 2.1405E+05 | 1.6974E+04 | 1.2874E+05 |
| F18 | std | 9.5702E+04 | 3.8478E+04 | 2.2588E+05 | 1.5736E+05 | 3.8019E+05 | 2.0789E+05 | 3.2112E+01 | 3.3742E+05 | 9.7381E+03 | 1.8128E+05 | 1.5522E+04 | 5.0433E+04 |
| F19 | mean | 6.7405E+04 | 4.4112E+03 | 9.9577E+03 | 5.2815E+03 | 1.7592E+04 | 8.1353E+03 | 1.9403E+03 | 1.3944E+04 | 3.3769E+03 | 1.4321E+04 | 7.1436E+03 | 5.1198E+03 |
| F19 | std | 6.6497E+04 | 2.4250E+03 | 1.1488E+04 | 3.7535E+03 | 1.7264E+04 | 8.6402E+03 | 1.6188E+01 | 1.1237E+04 | 5.3192E+03 | 1.6189E+04 | 6.1495E+03 | 2.3527E+03 |
| F20 | mean | 2.2755E+03 | 2.5010E+03 | 2.1971E+03 | 2.3430E+03 | 2.4048E+03 | 2.2679E+03 | 2.2467E+03 | 2.5133E+03 | 2.4621E+03 | 2.1777E+03 | 2.6998E+03 | 2.2105E+03 |
| F20 | std | 1.0059E+02 | 1.6603E+02 | 8.6822E+01 | 1.6287E+02 | 1.5542E+02 | 8.7595E+01 | 1.1969E+02 | 1.3795E+02 | 2.1983E+02 | 1.1596E+02 | 1.9699E+02 | 7.9614E+01 |
| F21 | mean | 2.3783E+03 | 2.4479E+03 | 2.3507E+03 | 2.3556E+03 | 2.3966E+03 | 2.3550E+03 | 2.3532E+03 | 2.4475E+03 | 2.3845E+03 | 2.3528E+03 | 2.4619E+03 | 2.3355E+03 |
| F21 | std | 2.1547E+01 | 4.2027E+01 | 1.1255E+01 | 1.1578E+01 | 2.4693E+01 | 2.7813E+01 | 1.2784E+01 | 1.7155E+01 | 2.3664E+01 | 3.3846E+01 | 3.8648E+01 | 9.0451E+00 |
| F22 | mean | 3.4928E+03 | 2.8468E+03 | 2.3006E+03 | 2.4571E+03 | 3.9296E+03 | 2.3102E+03 | 4.1847E+03 | 5.9670E+03 | 3.3583E+03 | 2.7294E+03 | 4.1830E+03 | 3.4742E+03 |
| F22 | std | 2.0240E+03 | 1.6877E+03 | 1.3269E+00 | 5.9712E+02 | 1.8190E+03 | 5.3558E+00 | 1.8688E+03 | 1.4867E+03 | 1.8077E+03 | 1.6197E+03 | 2.3756E+03 | 1.5783E+03 |
| F23 | mean | 2.7519E+03 | 2.8423E+03 | 2.7009E+03 | 2.7045E+03 | 2.7520E+03 | 2.7135E+03 | 2.7220E+03 | 2.8080E+03 | 2.7652E+03 | 2.7198E+03 | 2.8491E+03 | 2.6869E+03 |
| F23 | std | 2.9881E+01 | 7.8571E+01 | 1.6247E+01 | 1.7399E+01 | 2.3409E+01 | 2.2913E+01 | 1.9440E+01 | 1.9828E+01 | 3.3474E+01 | 4.8184E+01 | 6.3590E+01 | 8.4032E+00 |
| F24 | mean | 2.9001E+03 | 3.0052E+03 | 2.8690E+03 | 2.8790E+03 | 2.9156E+03 | 2.8842E+03 | 2.8926E+03 | 2.9805E+03 | 2.9322E+03 | 2.8639E+03 | 3.0385E+03 | 2.8593E+03 |
| F24 | std | 2.6569E+01 | 6.6312E+01 | 1.7175E+01 | 1.1739E+01 | 2.6339E+01 | 1.7191E+01 | 2.3424E+01 | 2.7541E+01 | 3.6810E+01 | 3.5838E+01 | 7.6049E+01 | 8.9635E+00 |
| F25 | mean | 2.9481E+03 | 2.9021E+03 | 2.8961E+03 | 2.8873E+03 | 2.9035E+03 | 2.9160E+03 | 2.8873E+03 | 2.8931E+03 | 2.8971E+03 | 2.8875E+03 | 2.8930E+03 | 2.8868E+03 |
| F25 | std | 2.6858E+01 | 1.9247E+01 | 1.8302E+01 | 2.1846E+00 | 2.0192E+01 | 2.1323E+01 | 1.4172E+00 | 1.2254E+01 | 1.5651E+01 | 1.3974E+00 | 1.4364E+01 | 1.3559E+00 |
| F26 | mean | 4.8285E+03 | 5.2719E+03 | 4.0407E+03 | 4.0335E+03 | 4.6365E+03 | 3.6706E+03 | 4.3195E+03 | 4.8802E+03 | 4.9149E+03 | 3.9018E+03 | 6.0434E+03 | 3.9691E+03 |
| F26 | std | 5.4546E+02 | 1.6802E+03 | 4.7961E+02 | 4.4870E+02 | 3.9354E+02 | 6.4601E+02 | 5.1214E+02 | 7.9276E+02 | 3.7984E+02 | 4.9258E+02 | 9.7525E+02 | 2.4801E+02 |
| F27 | mean | 3.2644E+03 | 3.2740E+03 | 3.2074E+03 | 3.2200E+03 | 3.2315E+03 | 3.2342E+03 | 3.2242E+03 | 3.2466E+03 | 3.2517E+03 | 3.2011E+03 | 3.2656E+03 | 3.2143E+03 |
| F27 | std | 2.5752E+01 | 3.9854E+01 | 9.8523E+00 | 1.0434E+01 | 1.3605E+01 | 1.2428E+01 | 1.4627E+01 | 1.5280E+01 | 2.8241E+01 | 9.5231E+00 | 3.2659E+01 | 7.4162E+00 |
| F28 | mean | 3.2989E+03 | 3.2286E+03 | 3.2185E+03 | 3.2136E+03 | 3.2661E+03 | 3.2650E+03 | 3.2191E+03 | 3.2167E+03 | 3.1430E+03 | 3.2221E+03 | 3.1347E+03 | 3.1904E+03 |
| F28 | std | 2.0453E+01 | 3.1479E+01 | 2.8860E+01 | 2.7152E+01 | 5.7204E+01 | 2.5751E+01 | 2.5964E+01 | 1.7020E+01 | 5.3816E+01 | 1.3912E+01 | 5.9865E+01 | 3.9162E+01 |
| F29 | mean | 3.8367E+03 | 4.2136E+03 | 3.5175E+03 | 3.6672E+03 | 3.8575E+03 | 3.6413E+03 | 3.7100E+03 | 3.7783E+03 | 3.8091E+03 | 3.4687E+03 | 4.0371E+03 | 3.5198E+03 |
| F29 | std | 2.4369E+02 | 3.4434E+02 | 1.0040E+02 | 2.2322E+02 | 2.2301E+02 | 1.0469E+02 | 1.1610E+02 | 1.1992E+02 | 1.8123E+02 | 8.5289E+01 | 2.6906E+02 | 7.9498E+01 |
| F30 | mean | 1.5774E+06 | 1.1793E+04 | 2.0090E+04 | 9.0335E+03 | 2.0499E+05 | 1.7182E+04 | 6.7753E+03 | 5.8706E+04 | 1.0583E+04 | 1.3207E+05 | 7.8287E+03 | 7.8607E+03 |
| F30 | std | 1.5468E+06 | 4.9169E+03 | 3.7118E+04 | 3.2975E+03 | 2.1407E+05 | 9.8608E+03 | 1.3711E+03 | 1.5394E+05 | 3.7594E+03 | 6.4870E+04 | 2.0481E+03 | 1.5336E+03 |
Table 4. Results of various algorithms tested on the CEC 2017 benchmark (dim = 50).

| ID | Metric | CSA | GTO | SBOA | SAO | RIME | GRO | RBMO | ED | HHWOA | IGWO | RTH | IRTH |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | mean | 1.0061E+08 | 2.6932E+04 | 9.8143E+03 | 3.3774E+03 | 4.6685E+06 | 6.1917E+03 | 4.3025E+03 | 3.0661E+04 | 6.3579E+03 | 1.3296E+07 | 4.9340E+03 | 1.0207E+03 |
| F1 | std | 6.6998E+07 | 4.0439E+04 | 7.8747E+03 | 4.0025E+03 | 1.5645E+06 | 6.6753E+08 | 6.4880E+03 | 3.7046E+04 | 8.1640E+03 | 6.8870E+06 | 7.2141E+03 | 7.7973E+02 |
| F3 | mean | 1.1318E+05 | 2.4888E+04 | 4.4634E+04 | 2.3680E+05 | 7.1933E+04 | 1.1173E+05 | 1.5347E+03 | 2.2685E+05 | 1.6399E+03 | 3.3718E+04 | 6.8803E+02 | 2.3112E+04 |
| F3 | std | 1.9925E+04 | 9.0939E+03 | 1.0044E+04 | 4.9013E+04 | 1.6775E+04 | 1.4506E+04 | 6.6635E+02 | 3.0384E+04 | 1.7714E+03 | 7.1836E+03 | 5.3315E+02 | 4.2277E+03 |
| F4 | mean | 7.2450E+02 | 5.8271E+02 | 5.5534E+02 | 5.5250E+02 | 6.2373E+02 | 7.1388E+02 | 5.4743E+02 | 5.5223E+02 | 5.2842E+02 | 5.7996E+02 | 4.7900E+02 | 5.1686E+02 |
| F4 | std | 5.6940E+01 | 5.6356E+01 | 5.2002E+01 | 5.3107E+01 | 6.5492E+01 | 9.4400E+01 | 5.1874E+01 | 6.8848E+01 | 6.0806E+01 | 4.4654E+01 | 4.7455E+01 | 4.3572E+01 |
| F5 | mean | 7.2730E+02 | 8.3009E+02 | 6.6839E+02 | 6.3834E+02 | 6.8296E+02 | 7.0489E+02 | 6.1652E+02 | 8.4007E+02 | 6.9747E+02 | 6.5099E+02 | 8.1089E+02 | 5.7340E+02 |
| F5 | std | 3.5773E+01 | 4.2449E+01 | 3.6159E+01 | 6.7996E+01 | 4.5728E+01 | 3.0326E+01 | 2.3098E+01 | 3.2277E+01 | 4.2042E+01 | 7.0291E+01 | 3.7451E+01 | 1.0156E+01 |
| F6 | mean | 6.3634E+02 | 6.5634E+02 | 6.0475E+02 | 6.0045E+02 | 6.1398E+02 | 6.1532E+02 | 6.0074E+02 | 6.2338E+02 | 6.1949E+02 | 6.0235E+02 | 6.5042E+02 | 6.0017E+02 |
| F6 | std | 5.1760E+00 | 8.0573E+00 | 2.3656E+00 | 3.7724E-01 | 5.3362E+00 | 4.3506E+00 | 4.3140E-01 | 5.5694E+00 | 7.8254E+00 | 1.0683E+00 | 6.5773E+00 | 5.8131E-02 |
| F7 | mean | 1.2148E+03 | 1.4666E+03 | 9.9864E+02 | 1.1730E+03 | 9.9296E+02 | 1.0137E+03 | 8.8313E+02 | 1.1366E+03 | 1.1452E+03 | 9.7105E+02 | 1.4861E+03 | 8.3252E+02 |
| F7 | std | 7.4495E+01 | 1.2384E+02 | 7.9150E+01 | 5.6625E+01 | 6.6777E+01 | 6.6363E+01 | 2.6777E+01 | 3.3048E+01 | 9.3219E+01 | 9.8242E+01 | 1.1143E+02 | 1.9228E+01 |
| F8 | mean | 9.9059E+02 | 1.1498E+03 | 9.6129E+02 | 9.3514E+02 | 9.9077E+02 | 1.0042E+03 | 9.2585E+02 | 1.1285E+03 | 1.0010E+03 | 9.3885E+02 | 1.1299E+03 | 8.6822E+02 |
| F8 | std | 3.1994E+01 | 4.1361E+01 | 3.5935E+01 | 6.4712E+01 | 3.9415E+01 | 4.1073E+01 | 2.8497E+01 | 2.9867E+01 | 4.5153E+01 | 4.7389E+01 | 4.6002E+01 | 1.2269E+01 |
| F9 | mean | 5.5287E+03 | 1.1214E+03 | 2.3187E+03 | 9.6698E+03 | 5.3978E+03 | 3.8155E+03 | 1.0987E+03 | 9.8008E+03 | 3.0773E+03 | 1.4590E+03 | 1.1068E+03 | 9.5177E+03 |
| F9 | std | 1.4208E+03 | 1.6148E+03 | 6.6211E+02 | 1.0880E+02 | 2.3403E+03 | 1.5112E+03 | 2.3884E+02 | 4.1579E+03 | 9.2371E+02 | 6.6414E+02 | 1.3211E+03 | 3.4327E+01 |
| F10 | mean | 8.4274E+03 | 9.3183E+03 | 6.5729E+03 | 7.0743E+03 | 7.2573E+03 | 7.5917E+03 | 8.0732E+03 | 8.8150E+03 | 7.7771E+03 | 1.2411E+04 | 8.0583E+03 | 7.3729E+03 |
| F10 | std | 1.5206E+03 | 1.9628E+03 | 7.7841E+02 | 2.1516E+03 | 9.1081E+02 | 6.7791E+02 | 8.7808E+02 | 4.7207E+02 | 8.3962E+02 | 3.3366E+03 | 1.0747E+03 | 6.9001E+02 |
| F11 | mean | 2.2089E+03 | 1.3014E+03 | 1.2605E+03 | 1.4837E+03 | 1.5163E+03 | 2.0220E+03 | 1.2534E+03 | 1.6398E+03 | 1.3536E+03 | 1.4213E+03 | 1.3374E+03 | 1.2134E+03 |
| F11 | std | 3.3316E+02 | 3.7047E+01 | 4.8919E+01 | 2.4635E+02 | 7.3619E+01 | 5.0661E+02 | 4.1089E+01 | 1.5333E+02 | 6.4632E+01 | 7.1821E+01 | 8.7637E+01 | 2.4017E+01 |
| F12 | mean | 1.9042E+08 | 4.3600E+06 | 3.4325E+06 | 4.0049E+06 | 8.1233E+07 | 1.2137E+07 | 5.3393E+05 | 3.5426E+06 | 8.7679E+05 | 2.3543E+07 | 2.1962E+05 | 1.1081E+06 |
| F12 | std | 1.1068E+08 | 3.8771E+06 | 2.5735E+06 | 2.4617E+06 | 5.3087E+07 | 6.0742E+06 | 4.0476E+05 | 3.0959E+06 | 4.8006E+05 | 1.4855E+07 | 1.2838E+05 | 4.4008E+05 |
| F13 | mean | 1.1326E+05 | 1.5210E+03 | 1.2579E+03 | 7.4823E+03 | 2.0486E+05 | 8.2442E+03 | 1.4485E+03 | 9.4118E+03 | 8.7782E+03 | 3.8757E+05 | 8.6025E+03 | 8.3578E+03 |
| F13 | std | 8.6611E+04 | 1.1100E+04 | 1.1711E+04 | 7.6736E+03 | 8.8566E+04 | 3.5245E+03 | 1.2197E+04 | 7.6332E+03 | 8.0889E+03 | 1.6366E+05 | 8.6463E+03 | 6.2844E+03 |
| F14 | mean | 1.0699E+05 | 2.9181E+04 | 1.4552E+05 | 7.6592E+04 | 2.3580E+05 | 1.9410E+05 | 1.5742E+03 | 3.9817E+05 | 8.3630E+03 | 5.9020E+04 | 6.2833E+03 | 5.3734E+04 |
| F14 | std | 8.4716E+04 | 2.5792E+04 | 8.3258E+04 | 5.3496E+04 | 1.4515E+05 | 1.3863E+05 | 3.0708E+01 | 2.1319E+05 | 6.2333E+03 | 4.8849E+04 | 3.5226E+03 | 2.3916E+04 |
| F15 | mean | 5.0313E+04 | 1.6094E+04 | 1.1361E+04 | 1.2034E+04 | 6.0197E+04 | 1.0276E+04 | 9.1573E+03 | 1.3718E+04 | 1.2448E+04 | 9.8461E+04 | 9.6313E+03 | 5.2293E+03 |
| F15 | std | 2.8826E+04 | 8.6353E+03 | 7.2988E+03 | 6.1921E+03 | 3.7033E+04 | 5.8115E+03 | 1.0277E+04 | 1.0212E+04 | 9.0788E+03 | 4.6299E+04 | 8.0729E+03 | 3.0271E+03 |
| F16 | mean | 3.2291E+03 | 3.5500E+03 | 2.6084E+03 | 3.1167E+03 | 3.5700E+03 | 2.6998E+03 | 3.1046E+03 | 4.0171E+03 | 3.4757E+03 | 2.7538E+03 | 3.7384E+03 | 2.8895E+03 |
| F16 | std | 4.6609E+02 | 4.3251E+02 | 3.7668E+02 | 4.3147E+02 | 3.4989E+02 | 2.7874E+02 | 3.4405E+02 | 3.0363E+02 | 4.6972E+02 | 7.1569E+02 | 4.3891E+02 | 3.4816E+02 |
| F17 | mean | 2.9195E+03 | 3.4006E+03 | 2.6491E+03 | 2.6760E+03 | 3.2685E+03 | 2.6841E+03 | 2.8503E+03 | 3.2643E+03 | 3.1381E+03 | 2.6349E+03 | 3.4887E+03 | 2.6727E+03 |
| F17 | std | 2.7226E+02 | 3.2628E+02 | 2.5667E+02 | 3.1011E+02 | 3.7153E+02 | 2.2467E+02 | 2.8814E+02 | 3.1314E+02 | 3.1099E+02 | 5.9631E+02 | 4.0167E+02 | 2.2049E+02 |
| F18 | mean | 7.7975E+05 | 1.5975E+05 | 1.7484E+06 | 1.2684E+06 | 3.1366E+06 | 1.4659E+06 | 2.5464E+03 | 3.5409E+06 | 3.7819E+06 | 9.6780E+05 | 4.1765E+06 | 8.5722E+05 |
| F18 | std | 5.8282E+05 | 9.5450E+04 | 9.3749E+05 | 1.6958E+06 | 2.1994E+06 | 7.4807E+05 | 2.9827E+02 | 2.3298E+06 | 3.4000E+04 | 7.4831E+05 | 2.9673E+04 | 3.5905E+05 |
| F19 | mean | 9.1656E+05 | 1.7667E+04 | 1.7495E+04 | 1.9516E+04 | 4.9398E+04 | 2.0995E+04 | 6.8977E+03 | 9.6275E+03 | 1.6169E+04 | 5.4529E+04 | 1.3777E+04 | 1.2172E+04 |
| F19 | std | 7.8390E+05 | 1.0998E+04 | 1.3175E+04 | 1.0569E+04 | 3.2077E+04 | 1.0397E+04 | 1.1055E+04 | 1.0207E+04 | 1.1239E+04 | 2.7900E+04 | 1.0042E+04 | 8.2462E+03 |
| F20 | mean | 2.8859E+03 | 3.1936E+03 | 2.6903E+03 | 2.8369E+03 | 3.1126E+03 | 2.7344E+03 | 2.9847E+03 | 3.4999E+03 | 3.1199E+03 | 2.8880E+03 | 3.3303E+03 | 2.8381E+03 |
| F20 | std | 3.3196E+02 | 3.2576E+02 | 2.8019E+02 | 3.1167E+02 | 2.6189E+02 | 1.9139E+02 | 2.1919E+02 | 1.3929E+02 | 3.0706E+02 | 5.0422E+02 | 2.8285E+02 | 1.9769E+02 |
| F21 | mean | 2.5010E+03 | 2.6350E+03 | 2.4226E+03 | 2.4262E+03 | 2.4864E+03 | 2.4658E+03 | 2.4300E+03 | 2.6484E+03 | 2.4952E+03 | 2.4187E+03 | 2.6376E+03 | 2.3748E+03 |
| F21 | std | 2.6116E+01 | 6.0906E+01 | 2.5396E+01 | 2.1516E+01 | 4.2529E+01 | 3.0487E+01 | 3.1347E+01 | 3.1080E+01 | 4.1563E+01 | 2.4425E+01 | 6.0141E+01 | 1.6028E+01 |
| F22 | mean | 9.8196E+03 | 1.1038E+04 | 7.0325E+03 | 7.8648E+03 | 9.1403E+03 | 7.4484E+03 | 9.4121E+03 | 1.1086E+04 | 9.5720E+03 | 1.2659E+04 | 9.9843E+03 | 8.9756E+03 |
| F22 | std | 1.9645E+03 | 1.2783E+03 | 2.5282E+03 | 2.4530E+03 | 1.0792E+03 | 3.1062E+03 | 1.7929E+03 | 4.5495E+02 | 1.1309E+03 | 4.0602E+03 | 8.3250E+02 | 5.7891E+02 |
| F23 | mean | 3.0373E+03 | 3.1887E+03 | 2.8625E+03 | 2.8521E+03 | 2.9503E+03 | 2.9276E+03 | 2.9452E+03 | 3.1068E+03 | 3.0225E+03 | 2.8881E+03 | 3.2158E+03 | 2.8315E+03 |
| F23 | std | 5.9087E+01 | 8.9178E+01 | 3.6448E+01 | 2.9978E+01 | 4.7530E+01 | 3.0110E+01 | 6.3324E+01 | 3.7490E+01 | 5.3944E+01 | 9.7407E+01 | 1.1214E+02 | 1.8742E+01 |
| F24 | mean | 3.1596E+03 | 3.3775E+03 | 3.0159E+03 | 3.0341E+03 | 3.1170E+03 | 3.0896E+03 | 3.1221E+03 | 3.3132E+03 | 3.1757E+03 | 3.0293E+03 | 3.3767E+03 | 2.9901E+03 |
| F24 | std | 6.5399E+01 | 1.1499E+02 | 3.2357E+01 | 8.2431E+01 | 5.1140E+01 | 3.2318E+01 | 6.2483E+01 | 6.2619E+01 | 6.8836E+01 | 9.0477E+01 | 1.1748E+02 | 1.3382E+01 |
| F25 | mean | 3.2827E+03 | 3.1116E+03 | 3.0896E+03 | 3.0399E+03 | 3.1040E+03 | 3.2833E+03 | 3.0701E+03 | 3.0838E+03 | 3.0522E+03 | 3.0886E+03 | 3.0560E+03 | 3.0398E+03 |
| F25 | std | 7.6004E+01 | 2.4692E+01 | 2.3398E+01 | 2.8291E+01 | 4.3321E+01 | 7.6243E+01 | 2.6078E+01 | 3.3028E+01 | 3.7189E+01 | 3.1530E+01 | 4.1819E+01 | 2.9817E+01 |
| F26 | mean | 7.3727E+03 | 8.4232E+03 | 5.3702E+03 | 4.9564E+03 | 5.7954E+03 | 5.5915E+03 | 5.6434E+03 | 7.1172E+03 | 7.2445E+03 | 5.2327E+03 | 8.6120E+03 | 4.6721E+03 |
| F26 | std | 9.2851E+02 | 3.1097E+03 | 1.5448E+03 | 2.0330E+02 | 6.7797E+02 | 1.2721E+03 | 5.1427E+02 | 3.3136E+02 | 9.8059E+02 | 7.5562E+02 | 2.6897E+03 | 1.7545E+02 |
| F27 | mean | 3.7627E+03 | 3.7985E+03 | 3.3206E+03 | 3.3398E+03 | 3.4973E+03 | 3.5686E+03 | 3.4251E+03 | 3.7948E+03 | 3.6474E+03 | 3.2972E+03 | 3.6495E+03 | 3.3554E+03 |
| F27 | std | 1.6583E+02 | 1.8379E+02 | 4.0717E+01 | 6.4461E+01 | 7.4864E+01 | 7.5700E+01 | 1.2615E+02 | 1.2441E+02 | 1.4977E+02 | 4.0092E+01 | 1.3327E+02 | 6.0297E+01 |
| F28 | mean | 3.7798E+03 | 3.3638E+03 | 3.3493E+03 | 3.2945E+03 | 3.3612E+03 | 3.6432E+03 | 3.3358E+03 | 3.3841E+03 | 3.3132E+03 | 3.3481E+03 | 3.3020E+03 | 3.3046E+03 |
| F28 | std | 1.3824E+02 | 4.9948E+01 | 3.5706E+01 | 2.4935E+01 | 3.1899E+01 | 1.0693E+02 | 2.8017E+01 | 3.0300E+01 | 2.4819E+01 | 3.9802E+01 | 3.8078E+01 | 1.2283E+01 |
| F29 | mean | 5.1170E+03 | 5.1110E+03 | 3.7409E+03 | 3.8490E+03 | 4.6146E+03 | 4.1729E+03 | 4.3188E+03 | 4.8887E+03 | 4.6566E+03 | 3.7495E+03 | 4.7027E+03 | 3.8929E+03 |
| F29 | std | 4.5475E+02 | 6.6936E+02 | 2.2728E+02 | 3.2789E+02 | 4.1467E+02 | 2.3171E+02 | 2.9683E+02 | 4.4946E+02 | 3.7499E+02 | 1.5925E+02 | 4.2710E+02 | 2.1670E+02 |
| F30 | mean | 9.4168E+07 | 1.2771E+06 | 9.5028E+05 | 9.4314E+05 | 2.5384E+07 | 1.4581E+06 | 1.8047E+06 | 2.8306E+06 | 1.0559E+06 | 8.3887E+06 | 7.9462E+05 | 8.9353E+05 |
| F30 | std | 2.8810E+07 | 4.3268E+05 | 1.7820E+05 | 1.6497E+05 | 1.1156E+07 | 3.1136E+05 | 1.0419E+06 | 1.0731E+06 | 3.5146E+05 | 2.3153E+06 | 1.4337E+05 | 1.1944E+05 |
Table 5. Results of various algorithms tested on the CEC 2017 benchmark (dim = 100).

| ID | Metric | CSA | GTO | SBOA | SAO | RIME | GRO | RBMO | ED | HHWOA | IGWO | RTH | IRTH |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | mean | 1.0796E+10 | 1.2291E+08 | 1.9079E+08 | 1.4125E+08 | 9.2498E+07 | 2.8043E+10 | 3.2114E+07 | 3.5563E+08 | 6.2511E+03 | 8.4984E+09 | 4.8468E+03 | 2.9204E+05 |
| F1 | std | 2.3153E+09 | 1.1691E+08 | 2.8426E+08 | 9.5248E+07 | 2.0914E+07 | 8.3110E+09 | 1.7226E+07 | 2.4699E+08 | 7.5622E+03 | 3.6494E+09 | 5.8996E+03 | 2.4211E+05 |
| F3 | mean | 3.6575E+05 | 1.6701E+05 | 2.4996E+05 | 7.3205E+05 | 4.9513E+05 | 3.1793E+05 | 8.1763E+04 | 6.2053E+05 | 3.0502E+05 | 2.6815E+05 | 6.6119E+04 | 2.1178E+05 |
| F3 | std | 3.6090E+04 | 2.0188E+04 | 2.4605E+04 | 1.5136E+05 | 6.4844E+04 | 2.2025E+04 | 1.4554E+04 | 8.1622E+04 | 4.4695E+04 | 3.4550E+04 | 1.7946E+04 | 1.6732E+04 |
| F4 | mean | 2.2867E+03 | 1.0400E+03 | 9.2686E+02 | 7.5589E+02 | 9.1987E+02 | 3.2238E+03 | 8.8382E+02 | 1.0617E+03 | 7.1074E+02 | 1.4502E+03 | 6.7215E+02 | 7.0039E+02 |
| F4 | std | 3.6033E+02 | 1.6256E+02 | 8.1214E+01 | 6.4034E+01 | 9.7811E+01 | 5.4772E+02 | 6.2345E+01 | 9.8521E+01 | 4.3889E+01 | 3.0432E+02 | 4.3372E+01 | 4.9982E+01 |
| F5 | mean | 1.2132E+03 | 1.3331E+03 | 9.9814E+02 | 1.1473E+03 | 1.0565E+03 | 1.1903E+03 | 9.5088E+02 | 1.5208E+03 | 1.0952E+03 | 9.6144E+02 | 1.2924E+03 | 7.3373E+02 |
| F5 | std | 5.9548E+01 | 5.2029E+01 | 8.8253E+01 | 2.1584E+02 | 9.2563E+01 | 6.0460E+01 | 8.1137E+01 | 7.7780E+01 | 9.1349E+01 | 1.0623E+02 | 6.8410E+01 | 3.2536E+01 |
| F6 | mean | 6.5615E+02 | 6.6147E+02 | 6.2481E+02 | 6.1112E+02 | 6.3541E+02 | 6.4021E+02 | 6.1012E+02 | 6.5117E+02 | 6.3956E+02 | 6.1177E+02 | 6.5239E+02 | 6.0383E+02 |
| F6 | std | 3.4967E+00 | 3.6748E+00 | 7.3954E+00 | 2.2631E+00 | 5.2757E+00 | 5.2722E+00 | 2.7333E+00 | 2.5228E+00 | 5.4985E+00 | 2.3041E+00 | 3.4803E+00 | 7.0743E-01 |
| F7 | mean | 2.6702E+03 | 2.9435E+03 | 1.9067E+03 | 1.9674E+03 | 1.6975E+03 | 2.1054E+03 | 1.3340E+03 | 2.6006E+03 | 2.2952E+03 | 1.4707E+03 | 2.9192E+03 | 1.1375E+03 |
| F7 | std | 1.9015E+02 | 1.8111E+02 | 1.8453E+02 | 6.8935E+01 | 1.6381E+02 | 1.8256E+02 | 1.1242E+02 | 1.9477E+02 | 2.1745E+02 | 1.3233E+02 | 1.6237E+02 | 3.6184E+01 |
| F8 | mean | 1.5488E+03 | 1.7576E+03 | 1.2899E+03 | 1.3881E+03 | 1.3501E+03 | 1.4761E+03 | 1.2193E+03 | 1.8030E+03 | 1.4177E+03 | 1.2549E+03 | 1.6695E+03 | 1.0453E+03 |
| F8 | std | 7.6411E+01 | 8.0124E+01 | 7.3924E+01 | 2.5748E+02 | 8.7527E+01 | 7.5646E+01 | 7.5834E+01 | 7.1840E+01 | 8.3247E+01 | 1.1857E+02 | 8.4022E+01 | 2.7592E+01 |
| F9 | mean | 2.1637E+04 | 2.3710E+04 | 1.7973E+04 | 9.3598E+03 | 2.7913E+04 | 2.1153E+04 | 6.1673E+03 | 5.2880E+04 | 1.3210E+04 | 1.7740E+04 | 2.2005E+04 | 2.9671E+03 |
| F9 | std | 2.8353E+03 | 1.4288E+03 | 3.5589E+03 | 4.7432E+03 | 1.1286E+04 | 3.7656E+03 | 2.3224E+03 | 7.9331E+03 | 3.3169E+03 | 8.3005E+03 | 1.5265E+03 | 5.4116E+02 |
| F10 | mean | 1.8640E+04 | 1.6711E+04 | 1.5033E+04 | 2.1979E+04 | 1.7178E+04 | 1.9026E+04 | 1.9198E+04 | 2.3004E+04 | 1.6596E+04 | 2.6118E+04 | 1.5900E+04 | 1.6966E+04 |
| F10 | std | 1.6322E+03 | 2.9862E+03 | 1.5073E+03 | 7.9384E+03 | 1.4149E+03 | 1.5011E+03 | 1.7005E+03 | 7.1066E+02 | 1.4348E+03 | 7.4900E+03 | 1.4060E+03 | 1.3518E+03 |
| F11 | mean | 5.2436E+04 | 6.8503E+03 | 1.2879E+04 | 1.3678E+05 | 7.2701E+03 | 5.2460E+04 | 3.2691E+03 | 7.6291E+04 | 2.3262E+03 | 1.2675E+04 | 2.2341E+03 | 3.6329E+03 |
| F11 | std | 1.2015E+04 | 2.0028E+03 | 4.5184E+03 | 3.5820E+03 | 1.1316E+03 | 9.9105E+03 | 3.3443E+02 | 1.2533E+03 | 2.3293E+02 | 4.4250E+03 | 2.4816E+02 | 5.5426E+02 |
| F12 | mean | 1.4599E+09 | 8.6200E+07 | 5.5383E+07 | 4.3425E+07 | 7.0047E+08 | 1.3882E+09 | 2.6548E+07 | 6.6270E+07 | 7.7935E+06 | 4.2442E+08 | 2.5147E+06 | 7.4138E+06 |
| F12 | std | 4.2559E+08 | 7.5239E+07 | 3.4302E+07 | 2.2567E+07 | 2.7708E+08 | 8.8712E+08 | 1.7397E+07 | 2.8065E+07 | 4.0611E+06 | 1.7948E+08 | 1.2313E+06 | 2.9190E+06 |
| F13 | mean | 5.7871E+04 | 2.0412E+04 | 1.2586E+04 | 5.0653E+03 | 3.0502E+05 | 4.6621E+05 | 1.1527E+04 | 1.0654E+04 | 1.2346E+04 | 4.6021E+05 | 9.7754E+03 | 4.2581E+03 |
| F13 | std | 2.1425E+04 | 8.0092E+03 | 7.6554E+03 | 2.5408E+03 | 9.2779E+04 | 5.0084E+05 | 1.3190E+04 | 6.8587E+03 | 4.6171E+03 | 2.5097E+05 | 6.2239E+03 | 1.8084E+03 |
| F14 | mean | 1.8320E+06 | 3.4519E+05 | 1.5068E+06 | 7.4591E+05 | 4.2998E+06 | 2.0630E+06 | 2.3440E+03 | 4.4042E+06 | 9.4906E+04 | 1.2201E+06 | 5.6723E+04 | 9.7278E+05 |
| F14 | std | 1.1244E+06 | 1.4883E+05 | 8.1152E+05 | 3.8218E+05 | 2.3267E+06 | 6.6870E+05 | 4.3042E+02 | 2.3672E+06 | 3.6996E+04 | 6.5436E+05 | 2.8685E+04 | 3.5057E+05 |
| F15 | mean | 4.7306E+04 | 1.0331E+04 | 9.3792E+03 | 3.4138E+03 | 1.4111E+05 | 8.4546E+03 | 5.9420E+03 | 5.4500E+03 | 4.8994E+03 | 1.5791E+05 | 6.3513E+03 | 3.5015E+03 |
| F15 | std | 2.2130E+04 | 6.7643E+03 | 1.9862E+04 | 1.8422E+03 | 5.0047E+04 | 3.1098E+03 | 2.8072E+03 | 4.1157E+03 | 2.7186E+03 | 9.8522E+04 | 6.9279E+03 | 1.1830E+03 |
| F16 | mean | 7.0208E+03 | 6.4016E+03 | 4.8320E+03 | 5.3092E+03 | 6.7794E+03 | 5.7443E+03 | 6.1604E+03 | 8.8893E+03 | 5.6002E+03 | 4.7585E+03 | 5.8946E+03 | 5.2741E+03 |
| F16 | std | 6.3029E+02 | 7.1431E+02 | 6.0708E+02 | 1.1795E+03 | 6.9960E+02 | 7.4077E+02 | 5.4160E+02 | 1.1174E+03 | 7.7195E+02 | 5.5010E+02 | 7.1371E+02 | 5.1970E+02 |
| F17 | mean | 5.4594E+03 | 5.9235E+03 | 4.5471E+03 | 4.9120E+03 | 5.4187E+03 | 4.4947E+03 | 5.1200E+03 | 6.1932E+03 | 5.3864E+03 | 4.5835E+03 | 5.8588E+03 | 4.5123E+03 |
| F17 | std | 5.6769E+02 | 7.9792E+02 | 6.5705E+02 | 9.3993E+02 | 6.6691E+02 | 3.8943E+02 | 6.2444E+02 | 7.5057E+02 | 6.6045E+02 | 1.2004E+03 | 5.9552E+02 | 4.4271E+02 |
| F18 | mean | 2.6621E+06 | 6.7322E+05 | 2.5007E+06 | 3.5943E+06 | 6.5375E+06 | 4.1983E+06 | 9.3063E+04 | 1.6759E+07 | 2.9216E+05 | 3.0857E+06 | 2.2075E+05 | 1.4884E+06 |
| F18 | std | 1.6010E+06 | 2.6362E+05 | 1.2516E+06 | 2.2512E+06 | 3.1007E+06 | 1.8357E+06 | 4.1975E+04 | 1.1618E+07 | 1.4517E+05 | 1.6189E+06 | 1.1547E+05 | 5.6961E+05 |
| F19 | mean | 2.0478E+07 | 7.3673E+03 | 8.8842E+03 | 5.8296E+03 | 8.2060E+06 | 1.2716E+04 | 8.7776E+03 | 5.9345E+03 | 8.2243E+03 | 3.3839E+05 | 7.4914E+03 | 3.8244E+03 |
| F19 | std | 1.5773E+07 | 4.9003E+03 | 1.6854E+04 | 4.6241E+03 | 5.7651E+06 | 1.1651E+04 | 9.2107E+03 | 5.2469E+03 | 8.5088E+03 | 2.3844E+05 | 5.8117E+03 | 1.3403E+03 |
| F20 | mean | 4.8156E+03 | 5.5846E+03 | 4.2361E+03 | 5.5287E+03 | 5.6094E+03 | 4.6611E+03 | 4.9284E+03 | 6.6569E+03 | 5.2328E+03 | 5.2528E+03 | 5.4210E+03 | 4.6769E+03 |
| F20 | std | 5.1783E+02 | 5.7573E+02 | 6.9821E+02 | 1.6169E+03 | 5.4963E+02 | 4.5768E+02 | 4.5800E+02 | 3.4571E+02 | 4.7387E+02 | 1.3730E+03 | 4.3822E+02 | 5.4930E+02 |
| F21 | mean | 3.0769E+03 | 3.3189E+03 | 2.7472E+03 | 2.8097E+03 | 2.9303E+03 | 2.9238E+03 | 2.8678E+03 | 3.3408E+03 | 2.9789E+03 | 2.7703E+03 | 3.3226E+03 | 2.5931E+03 |
| F21 | std | 9.6224E+01 | 1.2900E+02 | 6.6601E+01 | 1.5486E+02 | 9.5086E+01 | 5.7827E+01 | 9.2293E+01 | 9.9453E+01 | 1.2854E+02 | 1.1572E+02 | 1.7464E+02 | 3.3507E+01 |
| F22 | mean | 2.3682E+04 | 2.2456E+04 | 1.7810E+04 | 1.8420E+04 | 1.9373E+04 | 2.1876E+04 | 2.2099E+04 | 2.4743E+04 | 1.8637E+04 | 2.9514E+04 | 1.9858E+04 | 1.9522E+04 |
| F22 | std | 2.2739E+03 | 3.1970E+03 | 3.2386E+03 | 4.4716E+03 | 1.5509E+03 | 3.0638E+03 | 2.4624E+03 | 3.6785E+02 | 1.4238E+03 | 7.5541E+03 | 1.1098E+03 | 1.2455E+03 |
| F23 | mean | 3.7322E+03 | 4.0449E+03 | 3.2214E+03 | 3.1615E+03 | 3.4100E+03 | 3.5780E+03 | 3.5774E+03 | 3.9007E+03 | 3.6484E+03 | 3.2099E+03 | 3.7893E+03 | 3.1388E+03 |
| F23 | std | 1.2203E+02 | 2.5004E+02 | 6.3055E+01 | 5.1478E+01 | 6.3659E+01 | 5.9218E+01 | 1.1621E+02 | 1.3628E+02 | 1.4742E+02 | 5.4358E+01 | 1.3995E+02 | 4.3529E+01 |
| F24 | mean | 4.6520E+03 | 5.0088E+03 | 3.8244E+03 | 3.6369E+03 | 4.0183E+03 | 4.2925E+03 | 4.1980E+03 | 4.5563E+03 | 4.4495E+03 | 3.6930E+03 | 4.6006E+03 | 3.6110E+03 |
| F24 | std | 2.2888E+02 | 5.3163E+02 | 1.0782E+02 | 8.4064E+01 | 1.1986E+02 | 1.0559E+02 | 1.4439E+02 | 2.3528E+02 | 2.7535E+02 | 7.9090E+01 | 1.9475E+02 | 4.3077E+01 |
| F25 | mean | 4.9747E+03 | 3.7091E+03 | 3.6117E+03 | 3.4974E+03 | 3.6249E+03 | 5.1695E+03 | 3.5389E+03 | 3.7553E+03 | 3.3525E+03 | 4.1318E+03 | 3.3077E+03 | 3.3562E+03 |
| F25 | std | 3.9930E+02 | 9.7696E+01 | 7.4014E+01 | 3.9488E+01 | 9.2461E+01 | 5.0022E+02 | 6.2828E+01 | 1.0524E+02 | 6.7634E+01 | 1.9647E+02 | 5.9860E+01 | 5.1087E+01 |
| F26 | mean | 1.9919E+04 | 2.3920E+04 | 1.4278E+04 | 9.4952E+03 | 1.3588E+04 | 2.0100E+04 | 1.3091E+04 | 1.8071E+04 | 1.7686E+04 | 1.0887E+04 | 2.1498E+04 | 9.0624E+03 |
| F26 | std | 1.8088E+03 | 3.2193E+03 | 3.1031E+03 | 5.0358E+02 | 1.3213E+03 | 3.3593E+03 | 1.9563E+03 | 1.6297E+03 | 1.5716E+03 | 7.6587E+02 | 2.2914E+03 | 5.5218E+02 |
| F27 | mean | 4.2327E+03 | 4.1328E+03 | 3.5945E+03 | 3.4367E+03 | 3.8055E+03 | 4.0934E+03 | 3.5445E+03 | 4.0157E+03 | 3.9980E+03 | 3.5194E+03 | 3.7098E+03 | 3.5263E+03 |
| F27 | std | 2.4959E+02 | 4.1808E+02 | 7.7135E+01 | 4.9131E+01 | 1.3087E+02 | 1.4688E+02 | 7.7158E+01 | 2.4917E+02 | 2.4679E+02 | 4.7968E+01 | 1.2183E+02 | 3.9839E+01 |
| F28 | mean | 5.6824E+03 | 3.7775E+03 | 3.7433E+03 | 3.5264E+03 | 3.7073E+03 | 6.8902E+03 | 3.7673E+03 | 4.3062E+03 | 3.4623E+03 | 4.5759E+03 | 3.4018E+03 | 3.4917E+03 |
| F28 | std | 4.9773E+02 | 9.3473E+01 | 6.0429E+01 | 3.9822E+01 | 6.0461E+01 | 8.5430E+02 | 1.3487E+02 | 4.1938E+02 | 3.7518E+01 | 4.0310E+02 | 3.6614E+01 | 2.4332E+01 |
| F29 | mean | 1.0688E+04 | 8.2739E+03 | 6.1100E+03 | 6.0089E+03 | 8.5881E+03 | 7.3525E+03 | 7.3305E+03 | 8.1931E+03 | 7.4736E+03 | 6.2400E+03 | 7.3982E+03 | 6.4274E+03 |
| F29 | std | 1.1410E+03 | 6.4737E+02 | 6.7163E+02 | 5.0757E+02 | 7.9917E+02 | 4.7817E+02 | 6.1574E+02 | 1.4097E+03 | 6.4783E+02 | 5.4218E+02 | 5.7776E+02 | 3.1264E+02 |
| F30 | mean | 2.5154E+08 | 4.1555E+05 | 6.0889E+04 | 3.3083E+04 | 7.2816E+07 | 3.9971E+06 | 5.1329E+04 | 1.3432E+06 | 2.8723E+04 | 1.2941E+07 | 1.2044E+04 | 1.2884E+04 |
| F30 | std | 1.0039E+08 | 2.2547E+05 | 6.8850E+04 | 2.0488E+04 | 3.1741E+07 | 3.0652E+06 | 5.2548E+04 | 1.3423E+06 | 3.1480E+04 | 6.6092E+06 | 4.7693E+03 | 2.7861E+03 |
Table 6. Statistical results (+/=/−) of IRTH versus each comparison algorithm on the CEC 2017 test set.

| Statistical Results | CSA | GTO | SBOA | SAO | RIME | GRO | RBMO | ED | HHWOA | IGWO | RTH |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Dim = 30 (+/=/−) | 30/0/0 | 27/1/2 | 27/3/0 | 28/2/0 | 30/0/0 | 29/1/0 | 21/3/6 | 29/1/0 | 22/4/4 | 25/5/0 | 24/0/6 |
| Dim = 50 (+/=/−) | 30/0/0 | 28/0/2 | 25/4/1 | 27/3/0 | 30/0/0 | 28/2/0 | 25/2/3 | 30/0/0 | 25/2/3 | 27/1/2 | 22/4/4 |
| Dim = 100 (+/=/−) | 30/0/0 | 27/0/3 | 25/5/0 | 27/2/1 | 30/0/0 | 30/0/0 | 26/1/3 | 30/0/0 | 24/2/4 | 25/5/0 | 22/5/3 |
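Each +/=/− triple in Table 6 counts the benchmark functions on which IRTH is significantly better than, statistically equivalent to, or significantly worse than the column algorithm under the Wilcoxon rank-sum test [53]. As a minimal illustrative sketch (not the authors' code) of how one such verdict is typically computed, assuming hypothetical arrays of per-run best fitness values and a 5% significance level:

```python
# Sketch: one "+/=/-" verdict for IRTH vs. one competitor on one function.
# All run counts and fitness values below are hypothetical placeholders.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(seed=0)
irth_runs = rng.normal(414.0, 1.0, size=30)   # placeholder: IRTH best fitness over 30 runs
rival_runs = rng.normal(416.0, 1.5, size=30)  # placeholder: competitor best fitness

_, p_value = ranksums(irth_runs, rival_runs)  # two-sided Wilcoxon rank-sum test
if p_value >= 0.05:                           # no significant difference at the 5% level
    verdict = "="
else:                                         # minimization: lower mean fitness is better
    verdict = "+" if irth_runs.mean() < rival_runs.mean() else "-"
print(verdict, p_value)
```

Repeating this comparison over all benchmark functions of one dimension and tallying the verdicts would yield one row entry of the table.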
Table 7. Friedman mean rank test results on the CEC2017 suite.

| Algorithm | M.R (Dim = 30) | T.R (Dim = 30) | M.R (Dim = 50) | T.R (Dim = 50) | M.R (Dim = 100) | T.R (Dim = 100) |
|---|---|---|---|---|---|---|
| CSA | 9.17 | 12 | 9.55 | 12 | 9.72 | 12 |
| GTO | 8.44 | 9 | 8.93 | 10 | 8.62 | 10 |
| SBOA | 4.76 | 3 | 4.34 | 3 | 4.58 | 3 |
| SAO | 5.00 | 4 | 4.34 | 3 | 4.58 | 3 |
| RIME | 9.03 | 11 | 8.10 | 9 | 7.79 | 8 |
| GRO | 6.72 | 7 | 7.13 | 8 | 8.55 | 9 |
| RBMO | 3.44 | 2 | 4.27 | 2 | 4.48 | 2 |
| ED | 8.93 | 10 | 9.44 | 11 | 9.65 | 11 |
| HHWOA | 6.00 | 6 | 6.13 | 6 | 5.31 | 5 |
| IGWO | 5.79 | 5 | 6.03 | 5 | 6.48 | 7 |
| RTH | 7.89 | 8 | 7.00 | 7 | 5.68 | 6 |
| IRTH | 2.75 | 1 | 2.68 | 1 | 2.51 | 1 |
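The M.R columns of Table 7 are Friedman mean ranks [54]: every algorithm is ranked on each benchmark function, those ranks are averaged over all functions, and T.R reports the resulting overall rank order (IRTH's mean rank of 2.75 at dim = 30 places it first). A minimal sketch of that computation, assuming a hypothetical matrix of per-function mean errors in place of the actual results:

```python
# Sketch: Friedman mean ranks for 12 algorithms over 29 CEC2017 functions.
# The random matrix below is a hypothetical stand-in for mean errors
# (rows = functions, columns = algorithms; lower is better).
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

rng = np.random.default_rng(seed=1)
results = rng.random((29, 12))

ranks = np.vstack([rankdata(row) for row in results])  # rank 1 = best on each function
mean_ranks = ranks.mean(axis=0)                        # Friedman mean rank per algorithm

statistic, p_value = friedmanchisquare(*results.T)     # omnibus significance test
print(np.round(mean_ranks, 2), p_value)
```

With the real result matrices, the mean_ranks vector would reproduce the M.R columns, and sorting it gives the T.R ordering.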
Table 8. Experimental results of path planning for each algorithm in scenario 1.

| Metric | CSA | GTO | SBOA | SAO | RIME | GRO | RBMO | ED | HHWOA | IGWO | RTH | IRTH |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| mean | 419.61 | 416.11 | 415.17 | 415.19 | 419.99 | 416.63 | 435.07 | 416.94 | 416.05 | 535.52 | 419.16 | 414.26 |
| median | 418.42 | 415.63 | 414.98 | 415.09 | 419.39 | 417.10 | 428.35 | 416.71 | 415.58 | 533.47 | 417.74 | 414.91 |
| max | 445.69 | 421.51 | 421.69 | 416.88 | 435.52 | 419.93 | 475.40 | 422.28 | 428.34 | 589.77 | 439.95 | 415.40 |
| min | 415.66 | 408.92 | 408.00 | 414.44 | 414.64 | 411.06 | 414.99 | 415.53 | 414.39 | 485.18 | 416.23 | 409.81 |
Table 9. Experimental results of path planning for each algorithm in scenario 2.

| Metric | CSA | GTO | SBOA | SAO | RIME | GRO | RBMO | ED | HHWOA | IGWO | RTH | IRTH |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| mean | 418.74 | 414.98 | 415.29 | 414.82 | 419.21 | 417.17 | 431.96 | 417.19 | 415.50 | 543.70 | 418.03 | 413.16 |
| median | 417.97 | 414.96 | 415.05 | 414.75 | 418.74 | 417.19 | 430.74 | 417.11 | 415.60 | 542.89 | 417.68 | 414.57 |
| max | 426.49 | 421.63 | 417.93 | 415.97 | 425.32 | 419.16 | 455.78 | 418.86 | 417.06 | 593.32 | 421.11 | 415.21 |
| min | 415.20 | 410.62 | 414.22 | 413.89 | 416.03 | 411.58 | 416.33 | 415.46 | 414.07 | 504.76 | 416.66 | 409.11 |
Table 10. Experimental results of path planning for each algorithm in scenario 3.

| Metric | CSA | GTO | SBOA | SAO | RIME | GRO | RBMO | ED | HHWOA | IGWO | RTH | IRTH |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| mean | 448.58 | 443.14 | 432.96 | 435.98 | 453.42 | 437.32 | Inf | 435.80 | 434.59 | Inf | 439.48 | 426.47 |
| median | 444.99 | 438.96 | 434.45 | 434.07 | 455.38 | 436.99 | 467.35 | 435.69 | 433.48 | 627.88 | 436.34 | 426.56 |
| max | 482.86 | 493.82 | 438.50 | 455.64 | 476.76 | 441.13 | Inf | 440.04 | 458.43 | Inf | 488.07 | 432.37 |
| min | 436.95 | 433.27 | 424.53 | 424.87 | 433.35 | 434.38 | 440.05 | 432.85 | 431.87 | 549.64 | 426.46 | 415.65 |
Table 11. Experimental results of path planning for each algorithm in scenario 4.

| Metric | CSA | GTO | SBOA | SAO | RIME | GRO | RBMO | ED | HHWOA | IGWO | RTH | IRTH |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| mean | 532.25 | 497.29 | Inf | Inf | 464.62 | 443.95 | Inf | 442.58 | 484.80 | Inf | 500.90 | 436.81 |
| median | 523.86 | 506.76 | 453.80 | 468.00 | 462.77 | 443.32 | Inf | 441.07 | 476.71 | Inf | 500.81 | 437.33 |
| max | 658.77 | 553.79 | Inf | Inf | 520.56 | 450.93 | Inf | 458.01 | 647.34 | Inf | 608.92 | 450.47 |
| min | 447.32 | 435.26 | 418.55 | 417.41 | 426.01 | 439.00 | 464.20 | 428.98 | 414.55 | 532.06 | 433.89 | 415.08 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
