Article

A Modification of the Imperialist Competitive Algorithm with Hybrid Methods for Multi-Objective Optimization Problems

School of Economics and Management, China University of Geosciences (Beijing), Beijing 100083, China
*
Author to whom correspondence should be addressed.
Submission received: 12 December 2021 / Revised: 12 January 2022 / Accepted: 13 January 2022 / Published: 16 January 2022
(This article belongs to the Special Issue Meta-Heuristics for Manufacturing Systems Optimization)

Abstract

This paper proposes MOHMICA, a modification of the imperialist competitive algorithm with hybrid methods for solving multi-objective optimization problems, built on the earlier hybrid-method imperialist competitive algorithm (HMICA). The rationale is that HMICA has an obvious limitation: it can solve only single-objective optimization problems, not multi-objective ones. To adapt to the characteristics of multi-objective optimization problems, this paper improves the establishment of the initial empires, the colony allocation mechanism and the empire competition of HMICA, and introduces an external archiving strategy. A total of 12 benchmark functions are calculated, including 10 bi-objective and 2 tri-objective benchmarks, and four metrics are used to verify the quality of MOHMICA. A new comprehensive evaluation method, called the "radar map method", is then proposed, which can evaluate the convergence and distribution performance of a multi-objective optimization algorithm simultaneously. As the four coordinate axes of the radar maps show, this is a symmetrical evaluation method: the larger the radar map area, the better the calculation results of the algorithm. Using this new evaluation method, the algorithm proposed in this paper is compared with seven other high-quality algorithms. The radar map area of MOHMICA is at least 14.06% larger than that of the other algorithms, demonstrating that MOHMICA has an overall advantage.

1. Introduction

In production processes, engineering applications, and management and decision-making within complex systems, multi-objective optimization problems are more common than single-objective ones. However, because of the conflicts among the objective functions, it is very difficult to find a solution that is optimal for all objectives simultaneously. Therefore, there is rarely a single global optimal solution; instead, a set of Pareto optimal solutions is formed that balances the values of the various objective functions. In this case, the solution process becomes more complex than in single-objective optimization, and it is difficult to obtain a uniformly distributed approximate Pareto optimal solution set. Accordingly, studying the solution of such problems is of both theoretical and practical significance.

1.1. Description of Constrained Optimization

Generally, a multi-objective optimization problem can be described by Formula (1):
$$\begin{aligned} \min\ & \{ f_1(x), f_2(x), \ldots, f_m(x) \} \\ \text{s.t.}\ & g_i(x) \le 0, \quad i = 1, 2, \ldots, p \\ & h_j(x) = 0, \quad j = 1, 2, \ldots, q \\ & u_k \le x_k \le v_k, \quad x \in \mathbb{R}^n, \quad k = 1, 2, \ldots, n \end{aligned} \tag{1}$$
where $\{f_1(x), f_2(x), \ldots, f_m(x)\}$ are the individual objective functions. $g_i(x) \le 0$ is the $i$-th inequality constraint of the optimization problem in Formula (1), and $p$ is the number of inequality constraints. $h_j(x) = 0$ is the $j$-th equality constraint, and $q$ is the number of equality constraints. $u_k$ and $v_k$ are the lower and upper bounds of $x_k$, respectively. The set $D = \{ x \in S \mid g_i(x) \le 0,\ h_j(x) = 0,\ i = 1, 2, \ldots, p,\ j = 1, 2, \ldots, q \}$ that satisfies all inequality and equality constraints in the search space $S = \{ x \in \mathbb{R}^n \mid u_k \le x_k \le v_k,\ k = 1, 2, \ldots, n \}$ is called the feasible region of the constrained optimization problem in Formula (1). If a solution $x \in D$, $x$ is called a feasible solution; otherwise, it is called an infeasible solution. For two solutions $x_1 = (x_{11}, x_{12}, \ldots, x_{1n})$ and $x_2 = (x_{21}, x_{22}, \ldots, x_{2n})$, if $x_1$ is no worse than $x_2$ in all objectives and strictly better in at least one, there is a dominance relationship between $x_1$ and $x_2$: $x_1$ is the dominant solution and $x_2$ is the dominated solution. Otherwise, $x_1$ and $x_2$ are non-dominated with respect to each other.
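The dominance relation defined above can be sketched in a few lines (a minimal Python check for minimization problems; the function name `dominates` is ours):

```python
def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization):
    f1 is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))
```

For example, `dominates([1, 2], [2, 3])` holds, while `[1, 3]` and `[2, 2]` are mutually non-dominated.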

1.2. Related Work

This section is divided into two parts: multi-objective swarm and evolutionary algorithms, and multi-objective imperialist competitive algorithms.

1.2.1. Multi-Objective Swarm and Evolutionary Algorithms

Swarm and evolutionary algorithms can use the population to search in the optimal direction, so that the whole population approaches the Pareto front and finally obtains an approximate Pareto front. There have been many studies on swarm and evolutionary algorithms for multi-objective optimization since Schaffer [1] proposed the vector evaluated genetic algorithm (VEGA). Some well-known algorithms include the multiple objective genetic algorithm (MOGA) proposed by Fonseca and Fleming [2], the Pareto envelope-based selection algorithm II (PESA-II) proposed by Corne [3], the non-dominated sorting genetic algorithm (NSGA) [4] and its successor NSGA-II [5] proposed by Deb, multi-objective particle swarm optimization (MOPSO) proposed by Coello [6], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) proposed by Q. Zhang [7] and the multi-objective artificial bee colony algorithm proposed by Akbari [8].
When solving complex multi-objective optimization problems, the above algorithms may have one or more of the following problems:
(1)
As the number of objective functions increases, the proportion of non-dominated solutions in the population also increases, which slows down the search process;
(2)
For high-dimensional target space, the computational complexity to maintain diversity is too high, and it is difficult to find the adjacent elements of the solution;
(3)
The indexes for evaluating the comprehensive performance of an algorithm are poor. Almost all evaluation indexes can assess only one of the two aspects, convergence or distribution, of the population; therefore, it is presently difficult to comprehensively evaluate both the convergence and the distribution of swarm and evolutionary algorithms for multi-objective optimization;
(4)
For the high-dimensional target space, how to visualize the results is also a difficult problem.
In recent years, many new swarm and evolutionary algorithms, and improved versions of them, have also been applied effectively to multi-objective optimization. Mirjalili proposed the multi-objective grasshopper optimization algorithm (MOGOA) [9], the multi-objective ant lion optimizer (MOALO) [10] and the multi-objective grey wolf optimizer (MOGWO) [11]. MOGOA extends the grasshopper optimization algorithm (GOA) with an archive and a target selection mechanism; for most multi-objective problems, it is a competitive algorithm with good convergence and distribution quality. MOALO, based on the ant lion optimizer (ALO), was tested on 17 case studies, including 5 unconstrained functions, 5 constrained benchmarks and 7 engineering design optimizations, and most of its results were better than those of NSGA-II and MOPSO. MOGWO, based on the grey wolf optimizer (GWO), uses a fixed-size external archive to save the non-dominated solutions found during the iterative process and a grid-based approach to maintain and adaptively assess the Pareto front; after solving the CEC 2009 [12] benchmarks, its results were compared with those of MOPSO and MOEA/D. Based on MOGWO, Liu proposed a multiple-search-strategy multi-objective grey wolf optimizer (MMOGWO) [13] that uses an adaptive chaotic mutation strategy; an elitism strategy is also introduced to search for more potential Pareto optimal solutions and preserve the diversity of the approximated solution set. MMOGWO was verified on several multi-objective benchmark functions and obtained competitive results. Based on stochastic fractal search (SFS), Khalilpourazari [14] proposed multi-objective stochastic fractal search (MOSFS) with two new components, an archive and a leader selection mechanism; applied to the welded beam design problem, it obtained better results than MOPSO and MOGWO. Got [15] extended the whale optimization algorithm (WOA) and proposed a new multi-objective algorithm called the guided population archive whale optimization algorithm (GPAWOA). This algorithm uses an external archive to store the non-dominated solutions found while solving the optimization problem; leaders are selected from the archive to guide the population towards promising regions of the search space, and a crowding distance mechanism is incorporated into the WOA to maintain diversity. The algorithm obtained good results, but there is room for improvement in its initialization. In the future, some new swarm and evolutionary algorithms, including the aquila optimizer (AO) [16], the reptile search algorithm (RSA) [17] and the arithmetic optimization algorithm (AOA) [18], could also be extended to solve multi-objective optimization.

1.2.2. Multi-Objective Imperialist Competitive Algorithms

The imperialist competitive algorithm (ICA) is an evolutionary algorithm based on the mechanism of empires competing for colonies, proposed by Atashpaz-Gargari and Lucas [19]; it is a kind of socio-politically inspired heuristic optimization algorithm. At present, ICA is widely applied in many fields, including artificial intelligence [20,21], power electronic engineering [22], supply chain management [23,24,25,26], vehicle scheduling [27,28,29] and production process scheduling [30,31,32].
In recent years, some research has been carried out on solving multi-objective optimization problems using ICA and various modified ICAs. Enayatifar [33] proposed the multi-objective imperialist competitive algorithm (MOICA), whose main calculation steps strictly follow the basic ICA. It therefore suffers from premature convergence: empire competition reduces the number of empires, and because convergence is too fast and empires die out during the competition, the computation can terminate before the number of iterations reaches the maximum. Moreover, several of MOICA's steps leave room for improvement, including in search ability and convergence speed. To address these problems, researchers have proposed modified versions of MOICA. Ghasemi [34] proposed a modified bare-bones multi-objective imperialist competitive algorithm (MGBICA), in which a Gaussian bare-bones operator is introduced into empire assimilation to enhance population diversity; MGBICA was applied to multi-objective optimal electric power planning, namely the optimal power flow (OPF) and optimal reactive power dispatch (ORPD) problems. In this algorithm, the steps other than assimilation still have room for modification. Mohammad [35] improved MOICA with a new step in which all countries move towards the best imperialist, and used the algorithm to design the variables of a brushless DC motor to maximize efficiency and minimize total mass. Such a design can increase the convergence speed, but it also increases the risk of falling into local optima, and it does not solve the problem that empire competition may reduce the number of empires so that the iteration terminates before reaching the maximum number of iterations.
Piroozfard [36] designed an improved multi-objective imperialist competitive algorithm to solve a multi-objective job shop scheduling problem with low carbon emissions. The algorithm obtains good results for the model established in that paper, but its scope of application is clearly limited. When Khanali [37] researched multi-objective energy optimization and environmental emissions for a walnut production system, another modified MOICA was proposed; it solved the multi-objective optimization of the walnut production system and identified the energy consumption with the greatest environmental and economic benefits. To solve flexible job shop scheduling problems with transportation and sequence-dependent setup times (FJSSP-TSDST), a complex multi-objective problem, Li [38] proposed a new MOICA named the imperialist competitive algorithm with feedback (FICA). FICA introduces new assimilation and adaptive revolution mechanisms with feedback; meanwhile, to improve search ability, a novel competition mechanism based on transferring solutions among empires is presented.
In addition, some improved ICA algorithms that currently solve only single-objective optimization have the potential to solve multi-objective problems through further improvement. A hybrid of ICA and the Harris hawks optimizer (HHO) [39], called imperialist competitive Harris hawks optimization (ICHHO), was proposed to solve common optimization problems; it was tested on 23 benchmarks and obtained better results than both of the basic algorithms, ICA and HHO. To solve the assembly flow shop scheduling problem, Li [40] proposed the imperialist competitive algorithm with empire cooperation (ECICA), which uses a new imperialist competitive method based on adaptive cooperation between the strongest and weakest empires. Tao [41] presented an improved ICA, a discrete imperialist competitive algorithm (DICA), to solve the resource-constrained hybrid flow-shop problem with energy consumption (RCHFSP-EC); a new decoding method considering resource allocation was designed for this algorithm. A series of real shop scheduling instances were calculated and compared with other high-quality heuristic algorithms, and DICA obtained satisfactory results.

1.3. The Main Content of This Paper

From the above literature on the improvement and application of multi-objective imperialist competitive algorithms, such algorithms have three main problems. First, most algorithms fail to solve the problem that empire competition reduces the number of empires; when only one empire remains, the calculation cannot continue, which may terminate the iterative computation early. Second, in each step of the various modified imperialist competitive algorithms, most algorithms cannot balance local search and global search. Third, when solving practical problems, some algorithms are limited to the specific problem at hand and are not universal.
Therefore, in order to solve the above problems of multi-objective optimization using ICA, this paper proposes a new multi-objective imperialist competitive algorithm, called MOHMICA, based on the modified imperialist competitive algorithm HMICA [42].
The scientific contribution of this paper covers the following two aspects, algorithm theory and the evaluation of algorithm performance:
(1)
From the perspective of algorithm theory, this paper proposes a new scheme to solve multi-objective optimization problems based on HMICA. By calculating 12 multi-objective benchmarks and comparing with some high-quality algorithms in recent years, the algorithm proposed in this paper has certain advantages;
(2)
From the perspective of algorithm performance evaluation, this paper proposes a comprehensive evaluation method of multi-objective optimization algorithm by using multiple evaluation metrics.
The second part of this paper introduces MOHMICA, the proposed algorithm. The third part introduces the design of the numerical simulations, including the performance metrics, comparison algorithms, simulation settings and environment. The fourth part presents the calculation results and discussion, and the fifth part gives the conclusions and future research.

2. The Proposed Algorithm

The steps of MOHMICA include initialization of solutions, the establishment of the initial empires, the development of imperialists and assimilation of colonies, empire interaction, empire revolution, empire competition and external archive.
Among these steps, the initialization of solutions, the development of imperialists and assimilation of colonies, empire interaction, and empire revolution are the same as in HMICA. The establishment of the initial empires, empire competition and the external archive strategy are described below.

2.1. The Establishment of the Initial Empires

Firstly, generate N initial solutions, namely N countries, using Halton sequences, and then sort these N initial solutions. The rules are as follows:
(1)
The feasible solution is better than the infeasible solution. If both solutions are infeasible solutions, compare the value of the violation function. The smaller the value of the violation function is, the better the solution is;
(2)
If both solutions are feasible solutions, first judge whether there is a dominant relationship between the two solutions. If one solution dominates the other, the dominant solution is the optimal solution and the dominated solution is the inferior solution;
(3)
If the two solutions are mutually non-dominated feasible solutions, compare the number of solutions in the whole population by which each solution is dominated. The fewer solutions a solution is dominated by, the better it is;
(4)
If the two solutions are mutually non-dominated feasible solutions, and the number of dominated solutions of the two solutions is the same in the whole population, the crowding distance is compared. The larger the crowding distance is, the better the solution is. The calculation process of the crowding distance can be seen in the literature [5].
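The four sorting rules above can be sketched as a comparator (a minimal Python sketch; the record fields `violation`, `f`, `dom_count` and `crowding` are hypothetical names for the violation value, objective vector, count of dominating solutions and crowding distance):

```python
from functools import cmp_to_key

def dominates(f1, f2):
    # Pareto dominance for minimization
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def compare(a, b):
    """Return -1 if country a is better than b under the rules of Section 2.1."""
    a_feas, b_feas = a['violation'] == 0, b['violation'] == 0
    if a_feas != b_feas:                      # Rule 1: feasible beats infeasible
        return -1 if a_feas else 1
    if not a_feas:                            # both infeasible: smaller violation wins
        if a['violation'] != b['violation']:
            return -1 if a['violation'] < b['violation'] else 1
        return 0
    if dominates(a['f'], b['f']):             # Rule 2: Pareto dominance
        return -1
    if dominates(b['f'], a['f']):
        return 1
    if a['dom_count'] != b['dom_count']:      # Rule 3: dominated by fewer solutions
        return -1 if a['dom_count'] < b['dom_count'] else 1
    if a['crowding'] != b['crowding']:        # Rule 4: larger crowding distance
        return -1 if a['crowding'] > b['crowding'] else 1
    return 0

# countries = sorted(countries, key=cmp_to_key(compare))
```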
After sorting the countries, they are divided into Nimp empires. Each empire is composed of one imperialist and several colonies; that is, all countries consist of Nimp imperialists and Ncol colonies, where N = Nimp + Ncol. For the top Nimp − 1 imperialists, the number of colonies randomly assigned to each imperialist is determined according to Formula (2), and the remaining colonies are assigned to the last imperialist.
$$NC_i = \mathrm{round}\left(\frac{N_{col}}{N_{imp}}\right) + \begin{cases} \mathrm{randi}(0,1), & \text{when the } i\text{-th imperialist is a non-dominated feasible solution} \\ 0, & \text{when the } i\text{-th imperialist is a dominated feasible solution} \\ -1, & \text{when the } i\text{-th imperialist is an infeasible solution} \end{cases} \tag{2}$$
where NCi is the number of colonies allocated to the i-th imperialist, round(·) returns the integer closest to its argument, and randi(0,1) is a random number equal to 0 or 1.
Allocating colonies in this way avoids the disadvantage that the empire power formula of the basic ICA cannot be used in multi-objective optimization, and it simplifies the colony allocation steps. Meanwhile, when 2Nimp < Ncol, it ensures that each imperialist is assigned at least one colony.
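Formula (2) and the remainder rule can be sketched as follows (a Python sketch under our naming; each imperialist is represented only by a status string):

```python
import random

def allocate_colonies(statuses, n_col):
    """Colony counts per imperialist following Formula (2); `statuses` lists
    the sorted imperialists as 'nondominated', 'dominated' or 'infeasible'."""
    n_imp = len(statuses)
    base = round(n_col / n_imp)
    counts = []
    for status in statuses[:-1]:          # the top Nimp - 1 imperialists
        if status == 'nondominated':
            counts.append(base + random.randint(0, 1))
        elif status == 'dominated':
            counts.append(base)
        else:                             # infeasible imperialist
            counts.append(base - 1)
    counts.append(n_col - sum(counts))    # the rest go to the last imperialist
    return counts
```

With 2·Nimp < Ncol, the base share is at least 2, so even an infeasible imperialist keeps at least one colony.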

2.2. Empire Competition

Competition among empires is a process of redistribution of the colonies owned by each empire. The steps are as follows:
Step 1. Compare the quality of each empire and rank them to find out the strongest empire and the weakest empire.
Step 2. If the weakest empire has colonies, find the weakest colony in the weakest empire as the annexed country. If there are no colonies in the weakest empire, the imperialist will be annexed by other empires.
Step 3. Randomly put the annexed countries into other empires.
The rules for ranking the strength of the empire are as follows:
(1)
Compare the number of infeasible solutions in each empire; the empire with fewer infeasible solutions is stronger;
(2)
If the numbers of infeasible solutions of the two empires are the same, compare the numbers of dominated solutions; the empire with fewer dominated solutions is stronger;
(3)
If both of the above are the same, compare the average crowding distance of each empire; the larger the average crowding distance, the stronger the empire.
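These three ranking rules amount to a lexicographic sort key (a Python sketch; the per-country fields `violation`, `dom_count` and `crowding` are hypothetical names):

```python
def empire_strength_key(empire):
    """Sort key for empires: smaller key = stronger empire (Section 2.2 rules)."""
    countries = empire['countries']
    n_infeasible = sum(1 for c in countries if c['violation'] > 0)         # rule 1
    n_dominated = sum(1 for c in countries if c['dom_count'] > 0)          # rule 2
    avg_crowding = sum(c['crowding'] for c in countries) / len(countries)  # rule 3
    return (n_infeasible, n_dominated, -avg_crowding)

# empires.sort(key=empire_strength_key)   # strongest empire first
```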

2.3. External Archive Strategy

When solving multi-objective optimization problems, non-dominated solutions cannot be compared directly, so distribution indexes such as the crowding distance are needed to compare solution quality. A certain number of non-dominated solutions is generated in each iteration. To prevent these solutions from being lost in the next iteration, an external archive is established: it stores the non-dominated solutions, merges those obtained in each iteration, and deletes the duplicate or dominated individuals, so that the elite individuals found during the calculation are retained. The specific archiving process in this paper is as follows:
Step 1. Arrange the non-dominated solutions obtained in each iteration according to the crowding distance, place into the external archive and delete the duplicate solutions in the external archive;
Step 2. Update the external archive. Recalculate the number of dominated solutions and the crowding distance of each solution in the external archive, and set the crowding distance of the D solutions with the minimum value in each objective to positive infinity, where D is the number of objective functions;
Step 3. Delete the dominated solutions of the updated external archive and sort them by crowding distance. If the number of non-dominated solutions is larger than the maximum size of the external archive at this time, the part beyond the maximum size of the external archive will be deleted. In particular, in order to preserve more possible elite solutions, the size of the external archive can be enlarged to a certain extent, for example, twice the population;
Step 4. Find the colony that is dominated by the largest number of solutions among all colonies, replace it with the solution with the largest crowding distance in the external archive (excluding the solutions whose crowding distances are positive infinity), and then carry out the next iteration.
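Steps 1–3 of the archive update can be sketched as follows (a Python sketch, not the authors' code; solutions are objective tuples, and an NSGA-II-style crowding distance is recomputed on the merged set):

```python
def dominates(f1, f2):
    # Pareto dominance for minimization
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def crowding_distance(front):
    """NSGA-II-style crowding distance for a list of objective tuples."""
    dist = {s: 0.0 for s in front}
    for k in range(len(front[0])):
        ordered = sorted(front, key=lambda s: s[k])
        dist[ordered[0]] = dist[ordered[-1]] = float('inf')   # boundary solutions
        span = (ordered[-1][k] - ordered[0][k]) or 1.0
        for i in range(1, len(ordered) - 1):
            dist[ordered[i]] += (ordered[i + 1][k] - ordered[i - 1][k]) / span
    return dist

def update_archive(archive, new_solutions, max_size):
    """Merge, deduplicate, drop dominated solutions, then keep the most
    spread-out solutions up to the archive capacity."""
    merged = list({tuple(s) for s in archive + new_solutions})
    nondom = [s for s in merged
              if not any(dominates(o, s) for o in merged if o != s)]
    cd = crowding_distance(nondom)
    nondom.sort(key=lambda s: cd[s], reverse=True)    # most isolated first
    return nondom[:max_size]
```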

2.4. Implementation of the Proposed Algorithm

After the improvement with the hybrid methods, the pseudocode of MOHMICA is obtained (Algorithm 1), as shown below.
Algorithm 1: Pseudocode of MOHMICA
Input:
Population total number N
The number of initial imperialists Nimp and colonies Ncol
The maximum number of iterations MaxIt, archive size EA
Output: MOHMICA Pareto front
1  Initialize the MOHMICA population positions by the Halton sequence
2  for i = 1:N do
3     Calculate the function values, violation values (if the problem is constrained), the number of dominated solutions and the crowding distance of the initial countries.
4     Sort the initial solutions according to the sorting rules in Section 2.1.
5     Create empires according to the colony allocation rules in Section 2.1.
6  end for
7  while t ≤ MaxIt do
8     for i = 1:N do
9        The development of imperialists and the assimilation of colonies: according to the literature [42].
10       Calculate the function values, violation values (if the problem is constrained), the number of dominated solutions and the crowding distance of the countries.
11       Empire interaction: according to the literature [42].
12       Calculate the function values, violation values (if the problem is constrained), the number of dominated solutions and the crowding distance of the countries.
13       Empire revolution: according to the literature [42].
14       Calculate the function values, violation values (if the problem is constrained), the number of dominated solutions and the crowding distance of the countries.
15       Empire interaction: according to the literature [42].
16       Empire competition: according to Section 2.2 of this paper.
17       Update the external archive: according to Section 2.3 of this paper.
18    end for
19 end while
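Line 1 of Algorithm 1 initializes the population with a Halton sequence; a minimal self-contained sketch (using the radical-inverse construction with the first primes as bases, under our own function names) is:

```python
def radical_inverse(index, base):
    """One Halton coordinate: the radical inverse of `index` in `base`."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_population(n, lower, upper, primes=(2, 3, 5, 7, 11, 13)):
    """First n Halton points scaled to the box defined by the bound vectors."""
    dim = len(lower)
    return [[lower[k] + (upper[k] - lower[k]) * radical_inverse(i + 1, primes[k])
             for k in range(dim)]
            for i in range(n)]
```

The low-discrepancy points cover the search space more evenly than uniform random sampling, which is the motivation for using them in the HMICA/MOHMICA initialization.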

3. Experimental Design

This part introduces the benchmark functions calculated in this paper, the performance metrics, the comparison algorithms, and the simulation settings and environment.

3.1. Benchmark Functions

In order to verify the effectiveness of the proposed algorithm, 12 benchmark functions are calculated by MOHMICA, including SCH [5], FON [5], ZDT1–ZDT4 in the ZDT series [5], and 6 benchmarks in the UF series of CEC 2009. Among them, UF8 and UF10 are tri-objective benchmarks and the others are bi-objective benchmarks. The mathematical expressions of all benchmarks are shown in Table 1.

3.2. Performance Metrics

In order to evaluate the convergence and distribution of solutions, this paper uses four metrics: the convergence metric (CM), diversity metric (DM), generational distance (GD) and inverted generational distance (IGD). These four metrics are introduced as follows.
(1)
Convergence metric
This metric reflects the distance between the approximate Pareto front and the real Pareto front. The smaller its value, the closer the obtained solutions are to the real Pareto front and the better the convergence. It is calculated as shown in Equation (3):
$$CM(PF, PF^*) = \frac{1}{n_{nd}} \sum_{i=1}^{n_{nd}} \min_{y \in PF^*} \left\| PF_i - y \right\| \tag{3}$$
where PF is the calculated approximate Pareto front, PF* is the real Pareto front, $n_{nd}$ is the number of non-dominated solutions, and $\|\cdot\|$ denotes the Euclidean distance. In particular, CM = 0 means that the calculated Pareto front lies exactly on the true Pareto front.
(2)
Diversity metric
This metric measures the distribution of the non-dominated solutions. The smaller its value, the better the distribution of the non-dominated solutions. It is calculated as shown in Equation (4):
$$DM(PF, PF^*) = \frac{d_f + d_l + \sum_{i=1}^{n_{nd}-1} \left| \left\| PF_i - PF_{i+1} \right\| - \bar{d} \right|}{d_f + d_l + \sum_{i=1}^{n_{nd}-1} \left\| PF_i - PF_{i+1} \right\|} \tag{4}$$
where $\bar{d} = \frac{1}{n_{nd}-1} \sum_{i=1}^{n_{nd}-1} \left\| PF_i - PF_{i+1} \right\|$ is the average distance between consecutive non-dominated solutions,
where $d_f$ and $d_l$ are the Euclidean distances between the extreme solutions of the real Pareto front and the boundary solutions of the obtained non-dominated solution set.
(3)
Generational distance
This metric refers to the distance between the whole approximate Pareto front obtained by the algorithm and the real Pareto front. The smaller the GD is, the closer solutions are to the real Pareto front, and the better the convergence of the algorithm. The calculation method of this metric is shown in Equation (5):
$$GD(PF, PF^*) = \frac{1}{n_{nd}} \sqrt{\sum_{i=1}^{n_{nd}} \left( \min_{y \in PF^*} \left\| PF_i - y \right\| \right)^2} \tag{5}$$
(4)
Inverted generational distance
This metric refers to the distance between the real Pareto front and the approximate Pareto front obtained by the algorithm. To some extent, it is a comprehensive metric that measures both the convergence and the diversity of an algorithm. The smaller the IGD, the better the quality of the algorithm. It is calculated as shown in Equation (6):
$$IGD(PF, PF^*) = \frac{1}{n_{PF}} \sum_{i=1}^{n_{PF}} \min_{y \in PF} \left\| PF_i^* - y \right\| \tag{6}$$
where $n_{PF}$ is the number of points on the real Pareto front.
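Under the definitions above, CM, GD and IGD can be sketched directly (a Python sketch; DM is omitted because it additionally needs the extreme points of the real front):

```python
import math

def _min_dist(point, front):
    # Euclidean distance from `point` to its nearest neighbour in `front`
    return min(math.dist(point, q) for q in front)

def cm(pf, pf_true):
    """Convergence metric, Eq. (3): mean nearest distance to the true front."""
    return sum(_min_dist(p, pf_true) for p in pf) / len(pf)

def gd(pf, pf_true):
    """Generational distance, Eq. (5)."""
    return math.sqrt(sum(_min_dist(p, pf_true) ** 2 for p in pf)) / len(pf)

def igd(pf, pf_true):
    """Inverted generational distance, Eq. (6): roles of the fronts swapped."""
    return sum(_min_dist(t, pf) for t in pf_true) / len(pf_true)
```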

3.3. Comparison Algorithm and Simulation Setting

In this paper, each benchmark function is run independently 20 times using the MOHMICA algorithm, and the results are compared with those of several multi-objective algorithms that have achieved good results on this kind of problem in recent years, including PESA-II, MOEA/D, NSGA-II, MOABC, MOALO, MOGOA and MMOGWO. In particular, the parameter settings of PESA-II and MOEA/D are the same as in [3,7]; the data for the other algorithms are from [13].
The simulation environment is Windows 10, Intel® Core (TM) i7-10875H CPU @ 2.30 GHz with a 16.00 GB RAM memory with a running environment of MATLAB 2017b.
The initial population size of the MOHMICA algorithm is set to 100, and the size of the external archive is set to 200. For SCH and FON, the maximum number of iterations is 50, meaning the maximum number of evaluations is 5000. The maximum number of iterations for the other bi-objective functions is 250, that is, a maximum of 25,000 evaluations. For the tri-objective benchmark functions, the maximum number of iterations is 500, that is, a maximum of 50,000 evaluations. To ensure a fair comparison on the same function, the population size, maximum number of iterations and maximum number of evaluations of all comparison algorithms are the same as those of MOHMICA.

4. Results and Discussion

4.1. Calculation Results and Discussion of Benchmark Functions

The results of MOHMICA and the comparison algorithms are shown in Table 2, Table 3, Table 4 and Table 5. These four tables show the mean value and standard deviation (SD) of CM, DM, GD and IGD, respectively, together with the rankings of the mean values of each algorithm. The average ranking values of the four metrics for each algorithm are given in Table 6, which shows that MOHMICA ranked first more often than any other algorithm.
From Table 2, Table 3, Table 4, Table 5 and Table 6, some patterns in the metrics can be observed for each benchmark function calculated by MOHMICA. For the convergence metrics CM and GD, MOHMICA has an obvious overall advantage. For the distribution metric DM, MOHMICA ranked first more times than any other algorithm. For IGD, the comprehensive ranking of the proposed algorithm is slightly lower than those of MOALO and MMOGWO but significantly higher than those of the other algorithms; the reason is the low rankings on the SCH and UF2 functions. On the whole, the more complex a benchmark function is, the better the result obtained by MOHMICA. The results in Table 2, Table 3, Table 4 and Table 5 can be verified quantitatively by the Wilcoxon test on the four metrics of each algorithm. The test is conducted at three levels of significance, namely $\alpha = 0.01$, $\alpha = 0.05$ and $\alpha = 0.1$. The statistical hypotheses for the Wilcoxon test are as follows:
(1)
H0: The results of the two algorithms are homogenous;
(2)
H1: The results of the two algorithms are heterogenous.
According to the results of the Wilcoxon test in Table 7, the following conclusions can be drawn:
(1)
From the perspective of R+, MOHMICA has advantages over the other algorithms. Moreover, most of the results can pass the level of significance of α = 0.01 ;
(2)
For the convergence metric CM, only two comparison algorithms, MOGOA and MMOGWO, fail to pass the significance level of α = 0.01, though both pass α = 0.05. For the other convergence metric GD, the performance of MOHMICA is weaker than for CM: three algorithms, MOALO, MOGOA and MMOGWO, fail the significance level of α = 0.01, and the latter two do not pass even α = 0.1, although MOHMICA still has advantages over them;
(3)
For the distribution metric DM, except for PESA-II, which fails to reach the significance level of α = 0.1, MOHMICA outperforms the other algorithms at a significance level of α = 0.01;
(4)
For the comprehensive metric IGD, MOHMICA has some advantages over MOALO and MOGOA, but they are not significant, and the results of MOHMICA and MMOGWO are equal. It has obvious advantages over the other algorithms at a significance level of α = 0.05.
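To illustrate how the R+ statistic reported in Table 7 arises, the following sketch computes the Wilcoxon signed-rank sums for one metric over 12 paired benchmark results. The paired values are hypothetical stand-ins, not the exact figures from Table 2, and the average-rank tie handling is a minimal version of the full test (no normal approximation or p-value).

```python
def signed_rank_sums(a, b):
    """Compute the Wilcoxon signed-rank sums R+ and R- for paired samples.

    R+ sums the ranks of |a_i - b_i| over the pairs where a_i > b_i, i.e.
    where the first algorithm's (smaller-is-better) metric is worse."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # zero differences are dropped
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # group tied absolute differences and give them their average rank
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    r_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return r_plus, r_minus

# Hypothetical CM values of a rival algorithm vs. MOHMICA on 12 benchmarks
rival = [8.0e-3, 9.9e-3, 7.7e-2, 1.2e-1, 7.4e-2, 2.5,
         3.8, 7.4e-2, 1.9, 3.9e-2, 9.9e-1, 3.8e1]
mohmica = [1.3e-3, 2.7e-3, 2.6e-3, 2.3e-3, 4.0e-3, 2.0e-3,
           3.8e-2, 4.7e-2, 1.1e-1, 3.2e-2, 6.5e-2, 2.6e-1]
r_plus, r_minus = signed_rank_sums(rival, mohmica)
```

In this synthetic case the rival is worse on every benchmark, so R+ takes the whole rank sum n(n + 1)/2 = 78 and R− is 0; real comparisons split the sum between the two.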
In order to show the advantages of MOHMICA on these benchmark functions intuitively, the approximate Pareto front of each benchmark function obtained by MOHMICA, PESA-II and MOEA\D is compared with the real Pareto front, as shown in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12. These plots make it easy to see that MOHMICA has obvious advantages on the benchmark functions.
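For reference, the two distance-based metrics used throughout these comparisons can be sketched in a few lines. The fronts below are synthetic (a convex bi-objective front f2 = 1 − √f1 sampled at two densities), and the functions are minimal textbook definitions of GD and IGD, not the exact implementations used in the experiments.

```python
import math

def gd(front, reference):
    """Generational distance: average Euclidean distance from each point of
    `front` to its nearest neighbour in `reference`."""
    return sum(min(math.dist(p, q) for q in reference) for p in front) / len(front)

def igd(front, reference):
    """Inverted generational distance: the same distance measured from the
    sampled reference (true) front to the approximate front, so its value
    depends on how densely the reference front is sampled."""
    return gd(reference, front)

# Synthetic convex front f2 = 1 - sqrt(f1); the approximation sits 0.01 above it
approx = [(x / 10, 1 - math.sqrt(x / 10) + 0.01) for x in range(11)]
ref_coarse = [(x / 5, 1 - math.sqrt(x / 5)) for x in range(6)]      # 6 samples
ref_fine = [(x / 100, 1 - math.sqrt(x / 100)) for x in range(101)]  # 101 samples
gd_value = gd(approx, ref_fine)
igd_coarse = igd(approx, ref_coarse)
igd_fine = igd(approx, ref_fine)
```

The same approximate front receives different IGD values under the two reference samplings, which is exactly the sampling-density sensitivity discussed in Section 4.2.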

4.2. A New Method for Evaluating Multi-Objective Optimization Algorithm

The common metrics currently used to evaluate multi-objective optimization algorithms, CM, DM, GD and IGD, all have limitations. CM and GD are convergence metrics from different perspectives, while DM evaluates the distribution of solutions along the approximate Pareto front. Although IGD is generally considered a comprehensive metric that accounts for both the convergence and the distribution of the solutions, it also has limitations. On the one hand, the number of sampling points on the real Pareto front may affect the IGD value; on the other hand, for optimization problems with more than three objective functions, the convergence and distribution of the solutions cannot be judged from IGD alone, because those solutions cannot be visualized directly. It is therefore of theoretical significance to combine multiple metrics representing the convergence and distribution of the solutions and to propose a comprehensive evaluation method that can be expressed visually. The specific method is as follows.
Firstly, each metric result of the benchmark functions calculated by the different algorithms is processed logarithmically. The specific calculation is shown in Equation (7):
w = u − lg v
In Equation (7), v represents the mean value of CM, DM, GD or IGD, and w is the logarithmically processed value. u = |[lg v]_min| + 1, where [·] denotes taking the integer part and the subscript min denotes the minimum over the values of the same metric obtained by all algorithms. Then, the radar map of each benchmark function is drawn using the w_CM, w_DM, w_GD and w_IGD of the different algorithms, as shown in Figure 13, Figure 14, Figure 15 and Figure 16. The radar maps are drawn as follows. Starting from the origin, the lengths w_CM, w_DM, w_GD and w_IGD are used as half diagonals. w_CM and w_DM form one diagonal of the quadrilateral, because these two metrics directly characterize the convergence degree and the distribution degree of the approximate Pareto front, respectively. w_GD and w_IGD form the other diagonal, because these two metrics represent the distance from the approximate Pareto front obtained by an algorithm to the real Pareto front and the distance from the real Pareto front to the approximate one, respectively. The larger the area of the radar map, the better the comprehensive result of the algorithm on that benchmark function. For the 12 benchmark functions calculated by MOHMICA and the comparison algorithms in this paper, the larger the average area of an algorithm's 12 radar maps, the stronger its comprehensive ability to solve multi-objective optimization problems. Moreover, in terms of the actual values after the logarithmic transformation, when two algorithms yield similar radar map areas on the same benchmark function, there is little performance difference between them. The calculation results are shown in Table 8.
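The transformation and the area computation can be sketched as follows, assuming Equation (7) denotes w = u − lg v (so that smaller, i.e. better, metric values map to longer radar axes) and that the quadrilateral's diagonals are perpendicular. The numeric values below are placeholders, not entries of Table 8.

```python
import math

def radar_weights(values):
    """Apply Equation (7) to one metric's mean values across all algorithms,
    assuming w = u - lg v with u = |floor(min lg v)| + 1, so that smaller
    (better) metric values yield longer radar axes."""
    logs = [math.log10(v) for v in values]
    u = abs(math.floor(min(logs))) + 1
    return [u - lv for lv in logs]

def radar_area(w_cm, w_dm, w_gd, w_igd):
    """Area of the radar-map quadrilateral: w_CM and w_DM form one diagonal,
    w_GD and w_IGD the other; for perpendicular diagonals the area is half
    the product of the two diagonal lengths."""
    return 0.5 * (w_cm + w_dm) * (w_gd + w_igd)

# Placeholder CM means for four algorithms on one benchmark (not Table 8 data)
cm_means = [1.328e-3, 1.373e-3, 2.870e-3, 8.03e-3]
w_cm = radar_weights(cm_means)
area = radar_area(2.0, 2.0, 3.0, 3.0)
```

With this construction the algorithm with the smallest (best) CM receives the longest CM axis, and the half-diagonal formula reduces the area comparison of Table 8 to a single product per algorithm and benchmark.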
From the results in Table 8, the average radar map area of MOHMICA is the largest among all algorithms compared in this paper, being at least 14.06% larger than that of any other algorithm. This shows that the comprehensive ability of MOHMICA on the benchmark functions is also the strongest. Meanwhile, MOHMICA produced the radar map with the largest area more times than any other algorithm.

5. Conclusions and Future Research

This paper addressed the shortcoming of HMICA that it can only solve single-objective optimization problems and proposed the MOHMICA algorithm. To adapt to the characteristics of multi-objective optimization problems, MOHMICA updates the colony allocation strategy during empire creation on the basis of HMICA and adds an external archive step.
To verify the performance of MOHMICA, this paper calculated 12 common benchmark functions, including 10 bi-objective and 2 tri-objective benchmarks. Seven high-quality algorithms were then compared with the proposed algorithm using four metrics: CM, DM, GD and IGD. After the ranking and the Wilcoxon test, the proposed algorithm was found to have certain advantages over the other algorithms on most metrics, but this is not enough to prove an obvious advantage on every function. Therefore, a new comprehensive evaluation method called the "radar map method" is proposed as the other knowledge contribution of this paper; it evaluates comprehensive ability, covering both the convergence and the distribution of the approximate Pareto fronts obtained by different algorithms. The coordinate axes of the radar map are CM, DM, GD and IGD. After evaluating the algorithms compared with MOHMICA using the radar map method, the comprehensive ability of MOHMICA was found to be the best among all algorithms.
Three directions are recommended for future research. First, to further improve the distribution of the Pareto front when solving optimization problems with more than two objective functions, the external archive strategy may need to be refined. Second, to reduce the time consumption and complexity of MOHMICA, the operators in some steps may need to be replaced with simpler ones. Lastly, the application field should be considered: using MOHMICA to solve real-world problems, such as vehicle routing, industrial production management and production process scheduling optimization, is also important to explore in future research.

Author Contributions

Conceptualization, J.L.; methodology, J.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L.; discussion, J.L., J.Z., X.J. and H.L.; supervision, J.Z.; project administration, J.Z.; funding acquisition, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable; this article does not contain any studies involving humans.

Data Availability Statement

All relevant data are presented in the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schaffer, J.D. Multiple objective optimization with vector evaluated genetic algorithms. In Proceedings of the 1st International Conference on Genetic Algorithms, Pittsburgh, PA, USA, 1 July 1985; Lawrence Erlbaum: Hillsdale, NJ, USA, 1985; pp. 93–100.
  2. Fonseca, C.; Fleming, P. Genetic algorithms for multiobjective optimization: Formulation discussion and generalization. In Proceedings of the Fifth International Conference on Genetic Algorithms, Urbana, IL, USA, 1 June 1993; Morgan Kauffman Publishers: San Francisco, CA, USA, 1993; pp. 34–44.
  3. Corne, D.W.; Jerram, N.R.; Knowles, J. PESA-II: Region-based selection in evolutionary multiobjective optimization. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), San Francisco, CA, USA, 7–11 July 2001.
  4. Srinivas, N.; Deb, K. Multiobjective optimization using non-dominated sorting in genetic algorithms. IEEE Trans. Evol. Comput. 1994, 2, 221–248.
  5. Deb, K.; Pratap, A.; Agarwal, S. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
  6. Coello, C.A.C.; Pulido, G.T.; Lechuga, M.S. Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 256–279.
  7. Zhang, Q.; Li, H. MOEA\D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731.
  8. Akbari, R.; Hedayatzadeh, R. A multi-objective artificial bee colony algorithm. Swarm Evol. Comput. 2012, 2, 39–52.
  9. Mirjalili, S. Grasshopper optimization algorithm for multi-objective optimization problems. Appl. Intell. 2018, 48, 805–820.
  10. Mirjalili, S.; Jangir, P. Multi-objective ant lion optimizer: A multi-objective optimization algorithm for solving engineering problems. Appl. Intell. 2017, 46, 79–95.
  11. Mirjalili, S.; Saremi, S.; Mirjalili, S. Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. Expert Syst. Appl. 2016, 47, 106–119.
  12. Zhang, Q.; Zhou, A.; Zhao, S.; Suganthan, P.N.; Liu, W.; Tiwari, S. Multiobjective optimization test instances for the CEC 2009 special session and competition. Mech. Eng. 2008, 57, 722–748.
  13. Liu, J.; Yang, Z.; Li, D. A multiple search strategies based grey wolf optimizer for solving multi-objective optimization problems. Expert Syst. Appl. 2020, 145, 113134.
  14. Khalilpourazari, S.; Naderi, B.; Khalilpourazary, S. Multi-Objective Stochastic Fractal Search: A powerful algorithm for solving complex multi-objective optimization problems. Soft Comput. 2020, 24, 3037–3066.
  15. Got, A.; Moussaoui, A.; Zouache, D. A guided population archive whale optimization algorithm for solving multiobjective optimization problems. Expert Syst. Appl. 2020, 141, 112972.
  16. Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250.
  17. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158.
  18. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
  19. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667.
  20. Aliniya, Z.; Keyvanpour, M.R. CB-ICA: A crossover-based imperialist competitive algorithm for large-scale problems and engineering design optimization. Neural Comput. Appl. 2019, 31, 7549–7570.
  21. Iyer, V.H.; Mahesh, S.; Malpani, R.; Sapre, M.; Kulkarni, A. Adaptive Range Genetic Algorithm: A hybrid optimization approach and its application in the design and economic optimization of Shell-and-Tube Heat Exchanger. Eng. Appl. Artif. Intell. 2019, 85, 444–461.
  22. Elsisi, M. Design of neural network predictive controller based on imperialist competitive algorithm for automatic voltage regulator. Neural Comput. Appl. 2019, 31, 5017–5027.
  23. Arya, Y. Impact of ultra-capacitor on automatic generation control of electric energy systems using an optimal FFOID controller. Int. J. Energy Res. 2019, 43, 8765–8778.
  24. Hosseinzadeh, A.Z.; Razzaghi, S.R.S.; Amiri, G.G. An iterated IRS technique for cross-sectional damage modelling and identification in beams using limited sensors measurement. Inverse Probl. Sci. Eng. 2019, 27, 1145–1169.
  25. Hajiaghaei-Keshteli, M.; Fard, A.M.F. Sustainable closed-loop supply chain network design with discount supposition. Neural Comput. Appl. 2019, 31, 5343–5377.
  26. Karimi, B.; Hassanlu, M.G.; Niknamfar, A.H. An integrated production-distribution planning with a routing problem and transportation cost discount in a supply chain. Assem. Autom. 2019, 39, 783–802.
  27. Fakhrzad, M.B.; Goodarzian, F. A fuzzy multi-objective programming approach to develop a green closed-loop supply chain network design problem under uncertainty: Modifications of imperialist competitive algorithm. RAIRO Res. Oper. 2019, 53, 963–990.
  28. Gharib, M.; Mohammad, S. A dynamic dispatching problem to allocate relief vehicles after a disaster. Eng. Optimiz. 2020, 53, 1999–2016.
  29. Marandi, F.; Fatemi Ghomi, S.M.T. Integrated multi-factory production and distribution scheduling applying vehicle routing approach. Int. J. Prod. Res. 2019, 57, 722–748.
  30. Wang, S.; Liu, G.; Gao, S. A hybrid discrete imperialist competition algorithm for fuzzy job-shop scheduling problems. IEEE Access 2017, 7, 9320–9331.
  31. Zhang, H. Balancing problem of stochastic large-scale U-type assembly lines using a modified evolutionary algorithm. IEEE Access 2018, 6, 78414–78424.
  32. Lei, D.; Li, M.; Wang, L. A two-phase meta-heuristic for multi-objective flexible job shop scheduling problem with total energy consumption threshold. IEEE Trans. Cybern. 2018, 49, 1097–1109.
  33. Enayatifar, R.; Yousefi, M.; Abdullah, A.H. MOICA: A novel multi-objective approach based on imperialist competitive algorithm. Appl. Math. Comput. 2013, 219, 8829–8841.
  34. Ghasemi, M.; Ghavidel, S.; Ghanbarian, M.M. Multi-objective optimal electric power planning in the power system using Gaussian bare-bones imperialist competitive algorithm. Inform. Sci. 2015, 294, 286–304.
  35. Mohammad, A.; Hamed, M. Multi-objective modified imperialist competitive algorithm for brushless DC motor optimization. IETE J. Res. 2019, 65, 96–103.
  36. Piroozfard, H.; Wong, K.Y.; Tiwari, M.K. Reduction of carbon emission and total late work criterion in job shop scheduling by applying a multi-objective imperialist competitive algorithm. Int. J. Comput. Int. Syst. 2018, 11, 805.
  37. Khanali, M.; Akram, A.; Behzadi, J. Multi-objective optimization of energy use and environmental emissions for walnut production using imperialist competitive algorithm. Appl. Energy 2021, 284, 116342.
  38. Li, M.; Lei, D. An imperialist competitive algorithm with feedback for energy-efficient flexible job shop scheduling with transportation and sequence-dependent setup times. Eng. Appl. Artif. Intell. 2021, 103, 104307.
  39. Kaveh, A.; Rahmani, P.; Eslamlou, D. An efficient hybrid approach based on Harris Hawks optimization and imperialist competitive algorithm for structural optimization. Eng. Comput. 2021, 1–29.
  40. Li, M.; Su, B.; Lei, D. A novel imperialist competitive algorithm for fuzzy distributed assembly flow shop scheduling. J. Intell. Fuzzy Syst. 2021, 40, 4545–4561.
  41. Tao, X.-R.; Li, J.-Q.; Huang, T.-H.; Duan, P. Discrete imperialist competitive algorithm for the resource-constrained hybrid flowshop problem with energy consumption. Complex Intell. Syst. 2021, 7, 311–326.
  42. Luo, J.; Zhou, J.; Jiang, X. A modification of the imperialist competitive algorithm with hybrid methods for constrained optimization problems. IEEE Access 2021, 9, 161745–161760.
Figure 1. Pareto frontiers of SCH benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 2. Pareto frontiers of FON benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 3. Pareto frontiers of ZDT1 benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 4. Pareto frontiers of ZDT2 benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 5. Pareto frontiers of ZDT3 benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 6. Pareto frontiers of ZDT4 benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 7. Pareto frontiers of UF1 benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 8. Pareto frontiers of UF2 benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 9. Pareto frontiers of UF3 benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 10. Pareto frontiers of UF7 benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 11. Pareto frontiers of UF8 benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 12. Pareto frontiers of UF10 benchmark function obtained by MOHMICA, PESA-II and MOEA\D.
Figure 13. Comprehensive evaluation radar maps of SCH function (left), FON function (center) and ZDT1 function (right) calculated by eight different algorithms.
Figure 14. Comprehensive evaluation radar maps of ZDT2 function (left), ZDT3 function (center) and ZDT4 function (right) calculated by eight different algorithms.
Figure 15. Comprehensive evaluation radar maps of UF1 function (left), UF2 function (center) and UF3 function (right) calculated by eight different algorithms.
Figure 16. Comprehensive evaluation radar maps of UF7 function (left), UF8 function (center) and UF10 function (right) calculated by eight different algorithms.
Table 1. The mathematical expressions of all benchmarks.
Function Name | Mathematical Expressions | Dimensions | Bounds
SCH | f1 = x^2, f2 = (x − 2)^2 | 1 | [0, 2]
FON | f1 = 1 − exp[−Σ_{i=1..3} (x_i − 1/√3)^2], f2 = 1 − exp[−Σ_{i=1..3} (x_i + 1/√3)^2] | 3 | x_i ∈ [−4, 4]
ZDT1 | f1(x) = x1, f2(x) = g(x)[1 − √(x1/g(x))], g(x) = 1 + (9/(n − 1)) Σ_{i=2..n} x_i | 30 | x_i ∈ [0, 1]
ZDT2 | f1(x) = x1, f2(x) = g(x){1 − [x1/g(x)]^2}, g(x) = 1 + (9/(n − 1)) Σ_{i=2..n} x_i | 30 | x_i ∈ [0, 1]
ZDT3 | f1(x) = x1, f2(x) = g(x)[1 − √(x1/g(x)) − (x1/g(x)) sin(10πx1)], g(x) = 1 + (9/(n − 1)) Σ_{i=2..n} x_i | 30 | x_i ∈ [0, 1]
ZDT4 | f1(x) = x1, f2(x) = g(x)[1 − √(x1/g(x))], g(x) = 1 + 10(n − 1) + Σ_{i=2..n} [x_i^2 − 10 cos(4πx_i)] | 10 | x_i ∈ [0, 1]
UF1 | f1 = x1 + (2/|J1|) Σ_{j∈J1} [x_j − sin(6πx1 + jπ/n)]^2, f2 = 1 − √x1 + (2/|J2|) Σ_{j∈J2} [x_j − sin(6πx1 + jπ/n)]^2, J1 = {j | j is odd and 2 ≤ j ≤ n}, J2 = {j | j is even and 2 ≤ j ≤ n} | 30 | [0, 1] × [−1, 1]^{n−1}
UF2 | f1 = x1 + (2/|J1|) Σ_{j∈J1} y_j^2, f2 = 1 − √x1 + (2/|J2|) Σ_{j∈J2} y_j^2, J1 and J2 as in UF1, y_j = x_j − [0.3x1^2 cos(24πx1 + 4jπ/n) + 0.6x1] cos(6πx1 + jπ/n) for j ∈ J1 and y_j = x_j − [0.3x1^2 cos(24πx1 + 4jπ/n) + 0.6x1] sin(6πx1 + jπ/n) for j ∈ J2 | 30 | [0, 1] × [−1, 1]^{n−1}
UF3 | f1 = x1 + (2/|J1|)[4 Σ_{j∈J1} y_j^2 − 2 Π_{j∈J1} cos(20y_jπ/√j) + 2], f2 = 1 − √x1 + (2/|J2|)[4 Σ_{j∈J2} y_j^2 − 2 Π_{j∈J2} cos(20y_jπ/√j) + 2], J1 and J2 as in UF1, y_j = x_j − x1^{0.5[1.0 + 3(j − 2)/(n − 2)]}, j = 2, …, n | 30 | [0, 1]^n
UF7 | f1 = x1^{1/5} + (2/|J1|) Σ_{j∈J1} y_j^2, f2 = 1 − x1^{1/5} + (2/|J2|) Σ_{j∈J2} y_j^2, J1 and J2 as in UF1, y_j = x_j − sin(6πx1 + jπ/n), j = 2, …, n | 30 | [0, 1] × [−1, 1]^{n−1}
UF8 | f1(x) = cos(0.5πx1) cos(0.5πx2) + (2/|J1|) Σ_{j∈J1} [x_j − 2x2 sin(2πx1 + jπ/n)]^2, f2(x) = cos(0.5πx1) sin(0.5πx2) + (2/|J2|) Σ_{j∈J2} [x_j − 2x2 sin(2πx1 + jπ/n)]^2, f3(x) = sin(0.5πx1) + (2/|J3|) Σ_{j∈J3} [x_j − 2x2 sin(2πx1 + jπ/n)]^2, J1 = {j | 3 ≤ j ≤ n and j − 1 is a multiple of 3}, J2 = {j | 3 ≤ j ≤ n and j − 2 is a multiple of 3}, J3 = {j | 3 ≤ j ≤ n and j is a multiple of 3} | 30 | [0, 1]^2 × [−1, 1]^{n−2}
UF10 | f1(x) = cos(0.5πx1) cos(0.5πx2) + (2/|J1|) Σ_{j∈J1} [4y_j^2 − cos(8πy_j) + 1], f2(x) = cos(0.5πx1) sin(0.5πx2) + (2/|J2|) Σ_{j∈J2} [4y_j^2 − cos(8πy_j) + 1], f3(x) = sin(0.5πx1) + (2/|J3|) Σ_{j∈J3} [4y_j^2 − cos(8πy_j) + 1], J1, J2 and J3 as in UF8, y_j = x_j − 2x2 sin(2πx1 + jπ/n), j = 3, …, n | 30 | [0, 1]^2 × [−2, 2]^{n−2}
Table 2. The results of convergence metric (CM) for all benchmark functions.
Benchmark Functions | MOHMICA | PESA-II | MOEA\D | NSGA-II | MOABC | MOALO | MOGOA | MMOGWO
SCHMean1.328 × 1031.373 × 10−32.870 × 10−38.03 × 1038.38 × 1037.40 × 1038.28 × 1038.18 × 103
SD1.332 × 1042.441 × 10−43.546 × 10−35.41 × 1044.72 × 1041.27 × 1036.96 × 1046.85 × 104
Rank12358476
FONMean2.706 × 1032.249 × 10−33.182 × 10−39.97 × 1033.65 × 1021.11 × 1029.23 × 1021.06 × 102
SD2.154 × 1042.380 × 10−42.199 × 10−33.87 × 1048.89 × 10−32.08 × 1031.12 × 1028.29 × 104
Rank21347685
ZDT1Mean2.639 × 10−37.745 × 10−26.369 × 10−24.61 × 10−22.94 × 10−15.04 × 10−37.79 × 10−21.23 × 10−3
SD1.075 × 10−31.731 × 10−27.270 × 10−24.33 × 10−25.59 × 10−29.67 × 10−32.33 × 10−14.01 × 10−4
Rank26548371
ZDT2Mean2.341 × 10−31.253 × 10−18.943 × 10−17.52 × 10−23.05 × 10−15.40 × 10−44.02 × 10−38.52 × 10−4
SD6.979 × 10−42.924 × 10−24.823 × 10−14.28 × 10−27.19 × 10−27.52 × 10−56.95 × 10−31.06 × 10−4
Rank36857142
ZDT3Mean3.965 × 10−37.376 × 10−28.962 × 10−15.31 × 10−21.87 × 10−17.67 × 10−33.83 × 10−24.69 × 10−4
SD3.885 × 10−41.550 × 10−27.403 × 10−15.42 × 10−25.94 × 10−23.27 × 10−36.39 × 10−26.19 × 10−4
Rank26857341
ZDT4Mean2.003 × 10−32.5151.0117.082.25 × 10−520.115.34.25
SD2.899 × 10−41.6135.481 × 10−12.858.90 × 10−15.243.37 × 10−14.15
Rank14263875
UF1Mean3.810 × 10−23.8143.9822.22 × 10−17.95 × 10−26.76 × 10−29.04 × 10−24.43 × 10−2
SD8.746 × 10−31.990 × 10−13.816 × 10−19.24 × 10−22.10 × 10−25.15 × 10−23.65 × 10−23.80 × 10−2
Rank17864352
UF2Mean4.716 × 10−27.390 × 10−26.105 × 10−27.92 × 10−24.12 × 10−21.23 × 10−12.21 × 10−25.13 × 10−2
SD9.735 × 10−31.487 × 10−22.064 × 10−22.51 × 10−27.31 × 10−34.25 × 10−22.47 × 10−21.44 × 10−2
Rank36572814
UF3Mean1.112 × 1011.8794.1223.11 × 10−13.39 × 10−12.15 × 10−11.72 × 10−12.54 × 10−1
SD1.469 × 1011.2159.176 × 10−18.20 × 10−26.92 × 10−28.63 × 10−24.74 × 10−26.05 × 10−2
Rank17856324
UF7Mean3.172 × 1023.913 × 10−24.450 × 10−22.54 × 10−17.08 × 10−25.46 × 10−23.33 × 10−22.15 × 10−2
SD1.074 × 1021.636 × 10−22.496 × 10−21.55 × 10−12.30 × 10−24.69 × 10−21.91 × 10−25.04 × 10−3
Rank35628741
UF8Mean6.497 × 10−29.886 × 10−16.195 × 10−14.652.59 × 10−21.91 × 10−14.53 × 10−11.96
SD4.238 × 10−27.686 × 10−14.503 × 10−19.59 × 10−11.83 × 10−21.42 × 10−15.97 × 10−17.10 × 10−1
Rank26581347
UF10Mean2.562 × 10−137.7525.4412.76.68 × 10−13.462.484.79
SD5.884 × 10−212.3113.112.623.40 × 10−17.75 × 10−13.40 × 10−11.86
Rank18732546
Table 3. The results of diversity metric (DM) for all benchmark functions.
Benchmark Functions | MOHMICA | PESA-II | MOEA\D | NSGA-II | MOABC | MOALO | MOGOA | MMOGWO
SCHMean8.881 × 1017.445 × 10−11.0044.14 × 1019.35 × 1011.531.059.68 × 101
SD5.480 × 1028.738 × 10−18.104 × 10−14.43 × 1027.03 × 1021.11 × 1016.94 × 1021.35 × 101
Rank42613875
FONMean2.300 × 1019.964 × 10−18.855 × 10−13.92 × 1017.19 × 1011.361.448.97 × 101
SD1.603 × 1011.381 × 10−13.006 × 10−14.40 × 1021.14 × 1011.16 × 1011.37 × 1018.04 × 102
Rank16524783
ZDT1Mean7.849 × 10−14.924 × 10−16.744 × 10−14.56 × 10−18.08 × 10−11.111.201.09
SD1.005 × 10−13.133 × 10−14.230 × 10−15.10 × 10−28.04 × 10−24.71 × 10−27.41 × 10−21.24 × 10−1
Rank42315786
ZDT2Mean2.339 × 10−15.747 × 10−11.1015.01 × 10−18.50 × 10−11.021.009.89 × 10−1
SD1.175 × 10−13.249 × 10−15.375 × 10−16.90 × 10−29.17 × 10−27.40 × 10−32.30 × 10−41.44 × 10−1
Rank13824765
ZDT3Mean6.281 × 10−15.084 × 10−18.692 × 10−15.28 × 10−18.16 × 10−11.301.289.78 × 10−1
SD1.150 × 10−12.534 × 10−17.403 × 10−11.02 × 10−19.78 × 10−21.09 × 10−11.19 × 10−11.06 × 10−1
Rank31524876
ZDT4Mean7.841 × 10−15.215 × 10−11.0119.36 × 10−11.011.049.81 × 10−11.04
SD8.153 × 10−24.647 × 10−15.481 × 10−13.25 × 10−21.1 × 10−14.24 × 10−20.006.61 × 10−2
Rank21535747
UF1Mean5.859 × 10−19.075 × 10−16.372 × 10−18.11 × 1017.16 × 1011.141.079.48 × 101
SD2.984 × 10−15.877 × 10−15.285 × 10−17.58 × 1021.05 × 1011.31 × 1015.99 × 1029.99 × 102
Rank15432876
UF2Mean2.886 × 10−18.539 × 10−11.0445.92 × 1016.49 × 10−11.341.051.00 × 101
SD2.145 × 10−15.205 × 10−17.942 × 10−16.26 × 1029.13 × 10−21.24 × 1012.98 × 1029.03 × 102
Rank25634871
UF3Mean8.723 × 10−25.565 × 10−13.369 × 10−18.61 × 1018.76 × 10−11.501.081.19
SD2.201 × 10−15.321 × 10−12.178 × 10−18.62 × 1021.12 × 10−11.77 × 1012.76 × 1022.52 × 101
Rank13245867
UF7Mean3.881 × 10−14.977 × 10−11.2528.87 × 1018.85 × 10−11.381.181.11
SD2.363 × 10−13.726 × 10−18.852 × 10−17.93 × 1021.01 × 10−11.81 × 1018.02 × 1021.68 × 101
Rank12743865
UF8Mean6.238 × 10−16.185 × 10−11.2767.36 × 10−11.011.151.078.40 × 10−1
SD2.687 × 10−13.627 × 10−16.372 × 10−13.96 × 10−21.18 × 10−17.79 × 10−26.88 × 10−24.15 × 10−2
Rank21835764
UF10Mean5.439 × 10−15.024 × 10−15.950 × 10−17.39 × 10−18.60 × 10−11.061.098.99 × 10−1
SD2.459 × 10−13.404 × 10−16.405 × 10−14.49 × 10−21.20 × 10−16.67 × 10−24.23 × 10−23.74 × 10−2
Rank21345786
Table 4. The results of generational distance (GD) for all benchmark functions.
Benchmark Functions | MOHMICA | PESA-II | MOEA\D | NSGA-II | MOABC | MOALO | MOGOA | MMOGWO
SCHMean1.837 × 10−42.245 × 10−42.752 × 10−49.41 × 10−49.94 × 10−48.78 × 10−49.63 × 10−49.44 × 10−4
SD1.686 × 10−52.865 × 10−51.847 × 10−55.13 × 10−54.73 × 10−51.16 × 10−46.85 × 10−56.86 × 10−5
Rank12358476
FONMean3.369 × 10−42.648 × 10−44.661 × 10−41.18 × 10−31.06 × 10−21.28 × 10−31.01 × 10−21.23 × 10−3
SD3.695 × 10−52.591 × 10−54.613 × 10−43.75 × 10−51.96 × 10−32.10 × 10−41.03 × 10−37.99 × 10−5
Rank21348675
ZDT1Mean3.192 × 10−48.538 × 10−36.435 × 10−34.78 × 10−39.73 × 10−26.70 × 10−48.55 × 10−32.34 × 10−4
SD1.083 × 10−42.564 × 10−37.331 × 10−34.47 × 10−32.37 × 10−21.32 × 10−32.65 × 10−21.37 × 10−4
Rank26548371
ZDT2Mean3.134 × 10−41.303 × 10−28.962 × 10−27.58 × 10−31.21 × 10−16.12 × 10−52.25 × 10−39.79 × 10−5
SD8.934 × 10−52.948 × 10−34.810 × 10−24.26 × 10−33.52 × 10−26.83 × 10−65.75 × 10−31.09 × 10−5
Rank36758142
ZDT3Mean5.102 × 10−41.022 × 10−22.626 × 10−26.98 × 10−36.56 × 10−21.22 × 10−34.70 × 10−36.16 × 10−4
SD5.726 × 10−53.748 × 10−37.629 × 10−35.77 × 10−32.21 × 10−26.85 × 10−46.78 × 10−39.65 × 10−5
Rank16758342
ZDT4Mean2.498 × 10−42.603 × 10−12.601 × 10−17.13 × 10−111.92.0614.86.10 × 10−1
SD3.085 × 10−51.645 × 10−11.840 × 10−12.84 × 10−15.59 × 10−16.65 × 10−12.116.53 × 10−1
Rank13258674
UF1Mean6.493 × 10−34.351 × 10−14.918 × 10−13.21 × 10−22.72 × 10−28.32 × 10−31.14 × 10−25.42 × 10−3
SD9.354 × 10−48.314 × 10−21.727 × 10−11.46 × 10−29.75 × 10−35.29 × 10−36.19 × 10−34.17 × 10−3
Rank27865341
UF2Mean7.537 × 10−39.090 × 10−38.073 × 10−31.42 × 10−29.60 × 10−31.43 × 10−22.66 × 10−37.37 × 10−3
SD1.917 × 10−32.782 × 10−32.195 × 10−35.33 × 10−32.80 × 10−34.77 × 10−32.76 × 10−32.35 × 10−3
Rank35476812
UF3Mean2.766 × 10−22.635 × 10−15.185 × 10−13.20 × 10−21.39 × 10−12.94 × 10−21.87 × 10−23.24 × 10−2
SD6.751 × 10−21.350 × 10−11.510 × 10−18.99 × 10−33.07 × 10−16.64 × 10−34.53 × 10−33.46 × 10−3
Rank27846315
UF7Mean3.498 × 10−34.398 × 10−35.600 × 10−22.87 × 10−22.75 × 10−25.84 × 10−35.37 × 10−32.53 × 10−3
SD1.584 × 10−31.566 × 10−32.156 × 10−11.76 × 10−21.18 × 10−25.01 × 10−31.85 × 10−36.57 × 10−4
Rank23876541
UF8Mean1.503 × 10−29.907 × 10−27.081 × 10−25.26 × 10−11.85 × 10−21.99 × 10−25.70 × 10−22.03 × 10−1
SD1.600 × 10−27.682 × 10−26.364 × 10−21.14 × 10−11.52 × 10−21.45 × 10−27.03 × 10−27.12 × 10−2
Rank16582347
UF10Mean2.892 × 10−23.8832.0701.343.85 × 10−13.49 × 10−12.88 × 10−14.93 × 10−1
SD7.995 × 10−31.2961.4852.95 × 10−12.25 × 10−17.70 × 10−24.31 × 10−21.89 × 10−1
Rank18764325
Table 5. The results of inverted generational distance (IGD) for all benchmark functions.
Benchmark Functions | MOHMICA | PESA-II | MOEA\D | NSGA-II | MOABC | MOALO | MOGOA | MMOGWO
SCHMean3.071 × 10−28.118 × 10−24.315 × 10−22.00 × 10−32.07 × 10−31.83 × 10−32.10 × 10−32.02 × 10−3
SD1.025 × 10−27.402 × 10−21.123 × 10−21.35 × 10−41.18 × 10−43.18 × 10−41.74 × 10−41.71 × 10−4
Rank68724153
FONMean7.468 × 10−32.263 × 10−11.063 × 10−11.01 × 10−23.75 × 10−21.11 × 10−29.70 × 10−21.08 × 10−2
SD8.528 × 10−42.592 × 10−36.293 × 10−23.96 × 10−49.09 × 10−32.13 × 10−31.14 × 10−28.48 × 10−4
Rank18725463
ZDT1Mean7.408 × 10−38.114 × 10−26.826 × 10−23.72 × 10−23.02 × 10−11.57 × 10−32.58 × 10−21.18 × 10−3
SD1.136 × 10−32.040 × 10−27.086 × 10−24.33 × 10−25.59 × 10−29.67 × 10−32.33 × 10−14.01 × 10−4
Rank37658241
ZDT2Mean6.696 × 10−31.776 × 10−19.824 × 10−18.31 × 10−23.02 × 10−15.40 × 10−48.79 × 10−48.52 × 10−4
SD6.855 × 10−41.910 × 10−15.170 × 10−14.28 × 10−25.17 × 10−37.52 × 10−56.95 × 10−31.06 × 10−4
Rank46857132
ZDT3Mean5.766 × 10−32.589 × 10−28.298 × 10−22.91 × 10−21.21 × 10−14.40 × 10−31.46 × 10−22.74 × 10−3
SD8.433 × 10−46.404 × 10−31.750 × 10−23.86 × 10−23.43 × 10−22.80 × 10−34.68 × 10−23.71 × 10−4
Rank35768241
ZDT4Mean6.061 × 10−32.3672.5586.222.1718.515.34.04
SD6.388 × 10−41.4841.5862.858.92 × 10−15.253.38 × 10−14.16
Rank13462875
UF1Mean1.055 × 10−13.8524.0762.11 × 10−17.65 × 10−25.12 × 10−27.82 × 10−22.42 × 10−2
SD3.389 × 10−31.779 × 10−14.040 × 10−19.24 × 10−22.10 × 10−25.15 × 10−23.65 × 10−23.80 × 10−2
Rank57863241
UF2Mean1.000 × 10−19.146 × 10−21.419 × 10−17.14 × 10−23.92 × 10−21.07 × 10−11.28 × 10−24.70 × 10−2
SD8.393 × 10−32.784 × 10−21.840 × 10−22.51 × 10−27.31 × 10−34.25 × 10−22.47 × 10−21.44 × 10−2
Rank65842713
UF3Mean2.508 × 10−11.7394.1673.02 × 10−13.31 × 10−11.91 × 10−11.56 × 10−12.53 × 10−1
SD6.304 × 10−28.475 × 10−18.612 × 10−18.20 × 10−26.92 × 10−28.63 × 10−24.74 × 10−26.05 × 10−2
Rank37856214
UF7Mean5.428 × 10−29.423 × 10−21.486 × 10−12.54 × 10−17.20 × 10−23.03 × 10−22.66 × 10−22.11 × 10−2
SD1.141 × 10−21.164 × 10−25.324 × 10−21.55 × 10−12.30 × 10−24.69 × 10−21.91 × 10−25.04 × 10−3
Rank46785321
UF8Mean1.649 × 10−11.3439.484 × 10−14.641.89 × 10−21.50 × 10−12.11 × 10−11.95
SD4.118 × 10−26.953 × 10−14.026 × 10−19.59 × 10−11.83 × 10−21.42 × 10−15.97 × 10−17.10 × 10−1
Rank36581247
UF10Mean2.562 × 10−137.9625.6111.75.63 × 10−13.282.714.93
SD5.884 × 10−212.2813.102.623.40 × 10−17.75 × 10−14.30 × 10−11.86
Rank18763245
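IGD swaps the roles of the two point sets relative to GD: it averages, over the reference Pareto front, the distance to the nearest obtained point, so it penalizes a front that converges well but covers only part of the true front. A minimal sketch under the standard definition (the function name and toy fronts are illustrative, not taken from the paper):

```python
import math

def igd(front, reference):
    """Inverted generational distance (lower is better): the mean Euclidean
    distance from each reference point to its nearest obtained point.
    Unlike GD, a front covering only part of the reference front is
    penalized, so IGD reflects both convergence and spread."""
    return sum(min(math.dist(r, p) for p in front) for r in reference) / len(reference)
```

For example, with the reference front `[(0, 1), (0.5, 0.5), (1, 0)]`, an obtained front equal to the reference scores 0, while the single point `(0, 1)` scores about 0.71 even though it lies exactly on the reference front.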
Table 6. Average ranking of each algorithm on each metric.
| Metrics | MOHMICA | PESA-II | MOEA\D | NSGA-II | MOABC | MOALO | MOGOA | MMOGWO |
|---|---|---|---|---|---|---|---|---|
| CM | 1.467 | 4.267 | 4.533 | 4 | 4.2 | 3.6 | 3.8 | 2.933 |
| DM | 1.6 | 2.133 | 4.133 | 2.133 | 3.267 | 6 | 5.333 | 4.067 |
| GD | 1.4 | 4 | 4.667 | 4.4 | 5.133 | 3.2 | 3.467 | 2.733 |
| IGD | 2.667 | 5.133 | 5.467 | 4.2 | 3.533 | 2.533 | 2.933 | 2.333 |
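Each entry in Table 6 is the mean of an algorithm's per-benchmark ranks on one metric. Assuming the usual convention (rank 1 = best, i.e. smallest metric value, with no tie handling), the aggregation can be sketched as:

```python
def ranks(scores):
    """Competition-style ranks for one benchmark: rank 1 goes to the
    algorithm with the smallest (best) metric value."""
    order = sorted(range(len(scores)), key=scores.__getitem__)
    r = [0] * len(scores)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def average_ranks(score_rows):
    """Mean rank per algorithm across benchmarks.
    score_rows: one row of metric values per benchmark, one column per algorithm."""
    per_bench = [ranks(row) for row in score_rows]
    n = len(per_bench)
    return [sum(col) / n for col in zip(*per_bench)]

# Two benchmarks, three algorithms: per-benchmark ranks are averaged columnwise.
print(average_ranks([[0.1, 0.2, 0.3], [0.3, 0.1, 0.2]]))  # [2.0, 1.5, 2.5]
```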
Table 7. Wilcoxon test results on each metric: MOHMICA versus the other multi-objective algorithms.
| MOHMICA vs. | R+ (CM) | R− (CM) | α = 0.01 | α = 0.05 | α = 0.1 | R+ (DM) | R− (DM) | α = 0.01 | α = 0.05 | α = 0.1 |
|---|---|---|---|---|---|---|---|---|---|---|
| PESA-II | 76 | 2 | H1 | H1 | H1 | 53 | 25 | H0 | H0 | H0 |
| MOEA\D | 78 | 0 | H1 | H1 | H1 | 75 | 3 | H1 | H1 | H1 |
| NSGA-II | 78 | 0 | H1 | H1 | H1 | 58 | 20 | H1 | H1 | H1 |
| MOABC | 74 | 4 | H1 | H1 | H1 | 78 | 0 | H1 | H1 | H1 |
| MOALO | 77 | 1 | H1 | H1 | H1 | 78 | 0 | H1 | H1 | H1 |
| MOGOA | 74 | 4 | H1 | H1 | H1 | 78 | 0 | H1 | H1 | H1 |
| MMOGWO | 65 | 13 | H0 | H1 | H1 | 76 | 2 | H1 | H1 | H1 |

| MOHMICA vs. | R+ (GD) | R− (GD) | α = 0.01 | α = 0.05 | α = 0.1 | R+ (IGD) | R− (IGD) | α = 0.01 | α = 0.05 | α = 0.1 |
|---|---|---|---|---|---|---|---|---|---|---|
| PESA-II | 76 | 2 | H1 | H1 | H1 | 77 | 1 | H1 | H1 | H1 |
| MOEA\D | 78 | 0 | H1 | H1 | H1 | 78 | 0 | H1 | H1 | H1 |
| NSGA-II | 78 | 0 | H1 | H1 | H1 | 74 | 4 | H1 | H1 | H1 |
| MOABC | 78 | 0 | H1 | H1 | H1 | 65 | 13 | H0 | H1 | H1 |
| MOALO | 77 | 1 | H0 | H1 | H1 | 43 | 35 | H0 | H0 | H0 |
| MOGOA | 65 | 13 | H0 | H1 | H1 | 44 | 34 | H0 | H0 | H0 |
| MMOGWO | 60 | 18 | H0 | H0 | H0 | 39 | 39 | H0 | H0 | H0 |
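R+ and R− in Table 7 are the Wilcoxon signed-rank sums over the paired per-benchmark differences; with 12 benchmarks and no zero differences they must sum to 12 × 13 / 2 = 78, which every row of the table satisfies. The sketch below shows how the two sums are obtained under standard conventions (zero differences dropped, tied absolute values given average ranks); the paper's exact test configuration is not shown in this excerpt.

```python
def signed_rank_sums(diffs):
    """Wilcoxon signed-rank sums (R+, R-) for a list of paired differences.
    Zero differences are dropped; ties in |d| receive average ranks."""
    d = [x for x in diffs if x != 0]
    abs_sorted = sorted(range(len(d)), key=lambda i: abs(d[i]))
    rank = [0.0] * len(d)
    i = 0
    while i < len(d):
        # extend j over the run of equal |d| values starting at position i
        j = i
        while j + 1 < len(d) and abs(d[abs_sorted[j + 1]]) == abs(d[abs_sorted[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            rank[abs_sorted[k]] = avg
        i = j + 1
    r_plus = sum(r for r, x in zip(rank, d) if x > 0)
    r_minus = sum(r for r, x in zip(rank, d) if x < 0)
    return r_plus, r_minus

# Four paired differences, one negative: ranks of |d| are 4, 2, 1, 3.
print(signed_rank_sums([0.5, 0.2, -0.1, 0.3]))  # (9.0, 1.0)
```

The smaller of the two sums is then compared against the critical value for the chosen α (or a p-value is computed) to decide between H0 and H1.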
Table 8. Comparison of the radar map areas of the eight algorithms on each benchmark function.
| Benchmark Functions | | MOHMICA | PESA-II | MOEA\D | NSGA-II | MOABC | MOALO | MOGOA | MMOGWO |
|---|---|---|---|---|---|---|---|---|---|
| SCH | Area | 32.039 | 30.546 | 29.188 | 31.502 | 29.573 | 29.118 | 29.377 | 29.652 |
| | Rank | 1 | 3 | 7 | 2 | 5 | 8 | 6 | 4 |
| FON | Area | 34.586 | 27.338 | 27.199 | 28.592 | 20.651 | 25.751 | 17.089 | 26.727 |
| | Rank | 1 | 3 | 4 | 2 | 7 | 6 | 8 | 5 |
| ZDT1 | Area | 32.170 | 19.397 | 19.743 | 22.000 | 12.790 | 31.193 | 19.253 | 36.284 |
| | Rank | 2 | 6 | 5 | 4 | 8 | 3 | 7 | 1 |
| ZDT2 | Area | 35.139 | 17.062 | 10.128 | 19.528 | 12.468 | 41.670 | 31.032 | 39.188 |
| | Rank | 3 | 6 | 8 | 5 | 7 | 1 | 4 | 2 |
| ZDT3 | Area | 31.471 | 20.557 | 13.685 | 21.355 | 14.691 | 27.816 | 21.673 | 35.858 |
| | Rank | 2 | 6 | 8 | 5 | 7 | 3 | 4 | 1 |
| ZDT4 | Area | 33.407 | 8.173 | 8.334 | 5.329 | 4.715 | 3.241 | 2.323 | 6.052 |
| | Rank | 1 | 3 | 2 | 6 | 5 | 7 | 8 | 4 |
| UF1 | Area | 20.244 | 6.533 | 6.648 | 14.635 | 17.522 | 18.844 | 17.676 | 21.190 |
| | Rank | 2 | 8 | 7 | 6 | 5 | 3 | 4 | 1 |
| UF2 | Area | 20.892 | 18.08 | 18.031 | 18.635 | 20.688 | 16.299 | 23.856 | 23.463 |
| | Rank | 3 | 6 | 7 | 5 | 4 | 8 | 1 | 2 |
| UF3 | Area | 18.517 | 8.636 | 7.069 | 13.751 | 12.081 | 14.037 | 15.459 | 13.754 |
| | Rank | 1 | 7 | 8 | 5 | 6 | 3 | 2 | 4 |
| UF7 | Area | 22.816 | 21.079 | 15.972 | 14.261 | 17.439 | 19.857 | 21.024 | 23.255 |
| | Rank | 2 | 4 | 8 | 7 | 6 | 5 | 3 | 1 |
| UF8 | Area | 17.810 | 10.273 | 10.610 | 6.260 | 20.812 | 15.198 | 12.771 | 8.328 |
| | Rank | 2 | 6 | 5 | 8 | 1 | 3 | 4 | 7 |
| UF10 | Area | 14.884 | 2.493 | 3.209 | 4.246 | 9.889 | 6.770 | 7.328 | 6.083 |
| | Rank | 1 | 8 | 7 | 6 | 2 | 4 | 3 | 5 |
| Mean area | | 26.164 | 15.847 | 14.151 | 16.674 | 16.109 | 20.816 | 18.238 | 22.486 |
| Rank of mean area | | 1 | 7 | 8 | 5 | 6 | 3 | 4 | 2 |
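The radar map area in Table 8 is the area of the polygon traced by an algorithm's scores on four equally spaced axes (one per metric); larger is better. With four axes the angle between neighbouring axes is 90°, so the polygon area reduces to ½ Σ rᵢ rᵢ₊₁. The sketch below assumes this standard polygon-area formula; the paper's exact per-axis scaling of the CM, DM, GD, and IGD scores is not reproduced in this excerpt.

```python
import math

def radar_area(values):
    """Area of the polygon drawn on a radar chart with len(values) equally
    spaced axes, where values[i] is the radius plotted on axis i.
    Each adjacent pair of radii spans a triangle of area
    0.5 * r_i * r_{i+1} * sin(2*pi/k)."""
    k = len(values)
    wedge = math.sin(2 * math.pi / k)
    return 0.5 * wedge * sum(values[i] * values[(i + 1) % k] for i in range(k))

# Four axes, all radii 1: the polygon is a square with diagonal 2, area 2.
print(radar_area([1.0, 1.0, 1.0, 1.0]))  # 2.0
```

Because each axis value multiplies its two neighbours, an algorithm must score well on all four metrics simultaneously to obtain a large area, which is what makes the aggregate comparison in Table 8 stricter than averaging the four ranks.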
Luo, J.; Zhou, J.; Jiang, X.; Lv, H. A Modification of the Imperialist Competitive Algorithm with Hybrid Methods for Multi-Objective Optimization Problems. Symmetry 2022, 14, 173. https://doi.org/10.3390/sym14010173