A Self-Organizing Multimodal Multi-Objective Coati Optimization Algorithm

Abstract: The Coati Optimization Algorithm (COA) is a recent evolutionary algorithm noted for its efficacy on real-world problems, and its wide applicability across diverse domains attests to its performance and versatility. Compared with other evolutionary algorithms, COA has been shown to possess strong global and local search capabilities. This paper introduces a novel self-organizing multimodal multi-objective Coati Optimization Algorithm (MMOCOA) designed to tackle multimodal multi-objective problems by incorporating self-organizing mechanisms into the Coati optimization framework. MMOCOA uses a self-organizing speciation method to identify Pareto-optimal solutions: the speciation strategy establishes stable niches and continually updates them to actively search for and preserve Pareto-optimal solutions. Furthermore, an improved self-organization mechanism is proposed to accelerate the generation of the niches. Additionally, MMOCOA incorporates a non-dominated sorting method and a special crowding distance technique to preserve diversity in both the decision and objective spaces. To assess its effectiveness, MMOCOA is evaluated on eleven multimodal multi-objective test functions and benchmarked against five state-of-the-art multimodal multi-objective optimization algorithms. The experimental results highlight the strong performance of MMOCOA, which demonstrates the capability to discover and maintain a larger number of Pareto-optimal solutions in the decision space.


Introduction
Multi-objective problems arise widely in real-life applications. In contrast to single-objective optimization problems, multi-objective optimization problems involve multiple conflicting objectives, requiring a trade-off when attempting to optimize any one of them: improving the value of one objective may worsen the value of another. Without loss of generality, a minimization formulation of such problems can be expressed as

$$\min F(x) = \big(f_1(x), f_2(x), \dots, f_m(x)\big) \quad \text{s.t.} \quad g_s(x) \le 0 \ (s = 1, 2, \dots, l), \qquad h_t(x) = 0 \ (t = 1, 2, \dots, k), \tag{1}$$

where $g_s(x) \le 0 \ (s = 1, 2, \dots, l)$ and $h_t(x) = 0 \ (t = 1, 2, \dots, k)$ denote the inequality and equality constraints that a solution must satisfy in addition to optimizing the objective functions.
If a solution $x$ satisfies all the specified constraints, it is called a feasible solution. Different feasible solutions can be compared using the dominance relation. In multi-objective optimization, for feasible solutions $x_1$ and $x_2$, $x_1$ dominates $x_2$ when two conditions are satisfied: (1) $x_1$ is no worse than $x_2$ in every objective function, i.e., $f_i(x_1) \le f_i(x_2)$ for all $i = 1, \dots, m$; and (2) $x_1$ is strictly better than $x_2$ in at least one objective function, i.e., $f_j(x_1) < f_j(x_2)$ for some $j$.
A feasible solution $x$ is deemed a non-dominated solution if no other feasible solution dominates it. The collection of non-dominated solutions is commonly referred to as the Pareto-optimal set (PS), and the collection of objective vectors corresponding to the PS is referred to as the Pareto front (PF).
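The dominance test above translates directly into code. The following minimal Python sketch (function names are illustrative, assuming minimization) checks whether one objective vector dominates another and extracts the non-dominated subset of a set of objective vectors:

```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 dominates f2 (minimization):
    no worse in every objective, strictly better in at least one."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def non_dominated(F):
    """Indices of non-dominated rows in an (N, m) objective matrix."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]
```

For example, `(1, 2)` dominates `(2, 3)` but neither of `(1, 2)` and `(2, 1)` dominates the other.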
A multi-objective optimization problem may have two or more disjoint Pareto-optimal sets (PSs) that correspond to the same PF; Liang et al. [1] define this type of problem as a multimodal multi-objective problem (MMOP). Having multiple PSs is beneficial for decision makers, as it provides them with a range of feasible solutions to choose from. Although a single PS is generally sufficient for solving a problem, finding several PSs offers advantages such as greater flexibility for decision makers. Therefore, when working on MMOPs, the opportunity to obtain multiple PSs in the decision space must be taken into account.
Owing to their capacity to address such challenges, multi-objective optimization algorithms have attracted considerable attention in recent years, and numerous multi-objective optimization techniques built on sophisticated optimization algorithms have been proposed. Schaffer (1985) [2] presented the Vector Evaluated Genetic Algorithm (VEGA); however, this approach is prone to converging quickly to certain regions. Deb et al. (1994) [3] introduced non-dominated sorting and niching into the genetic algorithm (NSGA) to locate multiple PSs; this algorithm avoids VEGA's premature convergence in some regions and enables the population to spread over the entire Pareto-optimal region. Drawbacks of NSGA include its high computational complexity, lack of elitism, and need for a sharing parameter. Deb et al. (2002) [4] developed a fast and elitist multi-objective genetic algorithm (NSGA-II) to handle these problems. NSGA-II implements a selection operator that combines parents and children into a mating pool and then selects the N best feasible solutions from this pool; it uses a fast non-dominated sorting procedure, an elite-preserving mechanism, and a parameterless niching operator, employing the crowded-comparison technique instead of the sharing-function technique. Consequently, the algorithm's time complexity is reduced and no sharing parameter needs to be specified. However, the elite-preserving technique may be unable to store many non-dominated solutions, and the cost of the crowding-distance computation grows as the number of objective functions increases. Deb and Jain (2013) [5] developed an evolutionary many-objective optimization algorithm based on reference-point-based non-dominated sorting (NSGA-III). This approach emphasizes the size of the non-dominated population, increases population diversity, and is well suited to many-objective optimization problems. However, all of the algorithms discussed above are concerned with the diversity, feasibility, and convergence of solutions in the objective space, and only a handful focus on the distribution of feasible solutions in the decision space. Liang et al. (2016) [1] proposed the decision-space niching-based multi-objective evolutionary algorithm (DN-NSGAII), which not only attends to solutions in the objective space but also finds the majority of solutions in the decision space. Obtaining all Pareto-optimal solutions in the decision space, however, remains a significant challenge. Liu et al. (2019) [6] proposed a novel multimodal multi-objective evolutionary algorithm that uses a density-based one-by-one update strategy to estimate the overall density of solutions in the decision space while attempting to maintain the population's diversity there.
Mohammad Dehghani et al. [7] introduced the Coati Optimization Algorithm (COA) in 2022, a new meta-heuristic for optimization problems that models two of the coati's natural behaviors: attacking and hunting iguanas, and evading predators. According to simulation studies, COA compares favorably with existing swarm intelligence optimization algorithms in terms of both global and local search capabilities, and it has also proven effective in solving real-world optimization problems.
To solve MMOPs efficiently, a self-organizing multimodal multi-objective Coati optimization algorithm (MMOCOA) is put forward in this study. The main contributions of this study include: describing the implementation of the Coati optimization algorithm; introducing a self-organized speciation method to create stable niches (sub-populations) that continuously explore and maintain the Pareto-optimal solution set; using a special crowding distance technique and non-dominated sorting to maintain diversity in the objective and decision spaces; and evaluating MMOCOA's performance on eleven multimodal multi-objective test functions against five other multi-objective optimization algorithms.
The remainder of this paper is structured as follows: Section 2 introduces the Coati optimization algorithm and MMOPs; Section 3 describes the detailed implementation of MMOCOA; Section 4 presents the experimental results and analysis; and Section 5 offers conclusions and a look ahead to future research.

Related work

Coati optimization algorithm
The Coati optimization algorithm mimics two natural behaviors of coatis: attacking and capturing iguanas, and escaping from predators. In the first phase, half of the coatis climb toward the iguana to intimidate it; the iguana is deemed to occupy the best position found by the population. The mathematical model of this process is

$$x_{i,j}^{new} = x_{i,j} + r \cdot \left(Iguana_j - I \cdot x_{i,j}\right), \quad i = 1, 2, \dots, \lceil N/2 \rceil, \tag{2}$$

The iguana then jumps to a random ground position, and the other half of the coatis attempt to capture it; the mathematical model of this process is

$$Iguana^G_j = lb_j + r \cdot (ub_j - lb_j), \tag{3}$$

$$x_{i,j}^{new} = \begin{cases} x_{i,j} + r \cdot \left(Iguana^G_j - I \cdot x_{i,j}\right), & F_{Iguana^G} < F_i, \\ x_{i,j} + r \cdot \left(x_{i,j} - Iguana^G_j\right), & \text{otherwise}, \end{cases} \quad i = \lceil N/2 \rceil + 1, \dots, N, \tag{4}$$

where $x_{i,j}^{new}$ is the updated position of the $i$th coati in the $j$th dimension, $N$ is the population size, and $\lceil \cdot \rceil$ is the upward rounding (ceiling) symbol. The second phase models escaping from a predator: when a predator attacks, a coati flees its present spot in search of a safer position close to it. The mathematical model of this process is

$$lb_j^{local} = \frac{lb_j}{t}, \qquad ub_j^{local} = \frac{ub_j}{t}, \tag{5}$$

$$x_{i,j}^{new} = x_{i,j} + (1 - 2r) \cdot \left(lb_j^{local} + r \cdot \left(ub_j^{local} - lb_j^{local}\right)\right), \quad i = 1, 2, \dots, N, \tag{6}$$

where $F_i$ is the objective value corresponding to the $i$th coati, $t$ is the current iteration counter out of the total number of COA iterations, and $lb_j$ and $ub_j$ are the minimum and maximum values of the decision space in the $j$th dimension, respectively.
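As a minimal illustration of the two phases above, the following Python sketch implements one COA iteration for a minimization problem. The structure follows the description of [7], but the function and variable names are illustrative and this is a simplified sketch, not the authors' reference implementation:

```python
import numpy as np

def coa_step(X, F, fitness, lb, ub, t):
    """One COA iteration (minimization). X: (N, n) positions, F: (N,)
    objective values, t: current iteration counter (>= 1)."""
    N, n = X.shape
    iguana = X[np.argmin(F)]          # best member plays the iguana
    half = N // 2
    for i in range(N):
        r = np.random.rand(n)
        I = np.random.randint(1, 3)   # intensity factor, 1 or 2
        if i < half:                  # first half climbs toward the iguana
            cand = X[i] + r * (iguana - I * X[i])
        else:                         # iguana drops to a random ground spot
            ig_g = lb + np.random.rand(n) * (ub - lb)
            if fitness(ig_g) < F[i]:
                cand = X[i] + r * (ig_g - I * X[i])
            else:
                cand = X[i] + r * (X[i] - ig_g)
        cand = np.clip(cand, lb, ub)
        fc = fitness(cand)
        if fc < F[i]:                 # greedy acceptance
            X[i], F[i] = cand, fc
        # phase 2: escape a predator within a shrinking local neighbourhood
        lb_loc, ub_loc = lb / t, ub / t
        r2 = np.random.rand(n)
        cand = np.clip(X[i] + (1 - 2 * r2) * (lb_loc + r2 * (ub_loc - lb_loc)),
                       lb, ub)
        fc = fitness(cand)
        if fc < F[i]:
            X[i], F[i] = cand, fc
    return X, F
```

Because both phases use greedy acceptance, the best objective value in the population never deteriorates from one iteration to the next.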

Multimodal multi-objective problems
The coupling between the decision space and the objective space, particularly the mapping from PS to PF, is at the heart of MMOPs: the former determines the search difficulty, and the latter determines the value of the PS. The PS-to-PF mapping of an MMOP is generally one-to-one, but many-to-one cases also occur. Liang et al. (2016) [1] investigated the case of multiple PSs corresponding to the same PF and defined it as a multimodal multi-objective problem. Tanabe et al. (2017) [8] then presented a decomposition-based evolutionary algorithm for MMOPs. Liu et al. (2019) developed a group of complex multimodal multi-objective test functions and additionally proposed an efficient MMODE (multimodal multi-objective differential evolution) algorithm [9]. Building on prior research, Liang et al. [10] developed a collection of multimodal multi-objective benchmark problems with various characteristics, such as problems with different PS and PF shapes, problems with coexisting local and global PSs, and problems with scalable numbers of PSs, decision variables, and objectives.
There are multimodal multi-objective optimization problems with numerous PSs corresponding to the same PF; Figure 1 illustrates such a case, with two PSs corresponding to the same PF.
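A tiny constructed example (not one of the benchmark functions used later) makes this many-to-one mapping concrete: if the objectives depend on a decision variable only through its absolute value, mirrored decision vectors form distinct PSs that map to the same PF point.

```python
import numpy as np

def toy_mmop(x):
    """Toy bi-objective MMOP: the objectives depend on x[0] only through
    |x[0]|, so mirrored decision vectors map to the same objective point.
    Pareto set: x[1] = 0 with |x[0]| in [0, 1] -> two symmetric PSs, one PF."""
    y = abs(x[0])
    f1 = y
    f2 = (y - 1.0) ** 2 + x[1] ** 2
    return np.array([f1, f2])

a, b = np.array([0.5, 0.0]), np.array([-0.5, 0.0])
# distinct Pareto-optimal decision vectors, identical objective vector
assert np.allclose(toy_mmop(a), toy_mmop(b))
```

An algorithm that keeps only one solution per PF point would return either `a` or `b`; an MMOP solver should preserve both.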

The self-organized speciation
The multi-population method is often used to address multimodal problems [11][12]. The size of each sub-population is usually preset when dividing the population into sub-populations [13], which can lead to inappropriate allocation [14], lowering algorithm performance or resulting in an incomplete PS. Multiple swarming tactics have been employed for MMOPs, but the existing methods still have shortcomings. For example, MO_Ring_PSO_SCD [15] constructs several sub-populations using a ring topology; however, it selects the neighborhood based on individual indices, which may fail to reflect the true distribution in the decision space and can hurt the exploration effectiveness of the sub-populations. SSMOPSO [16] uses a self-organizing mechanism to form sub-populations. While this approach improves the effectiveness of the algorithm and reduces the number of overlapping individuals between sub-populations, the seed of a sub-population may not be a Pareto-optimal solution, which can lead individuals in that sub-population toward local optima and makes it harder to obtain all PSs.
To alleviate the drawbacks of the population techniques described above, this paper proposes an improved self-organized mechanism. This approach uses a self-organized mechanism to form sub-populations while ensuring that the seed of each sub-population is a non-dominated individual rather than a local Pareto-optimal solution. Since all individuals in a sub-population move toward its seed, this enhancement improves the overall algorithm performance and its global search ability. The details of the improved self-organizing mechanism are given in the following section.

The improved self-organized mechanism
We present an improved self-organizing mechanism that produces sub-populations. First, all non-dominated solutions in the population are extracted and saved in the set P. The sub-population radius R is then determined. Next, a seed for each sub-population is chosen according to the non-dominated rank of the individuals; individuals within Euclidean distance R of the seed are assigned to that sub-population and removed from the initial population. The assignment step stops once every member of the population has been assigned to a sub-population. Finally, the method verifies whether the seeds of all sub-populations are non-dominated solutions; if a seed is not, its sub-population is merged into the sub-population with the closest non-dominated seed, as indicated in Figure 2. The implementation of the self-organizing speciation method based on the improved self-organizing mechanism is outlined in Table 1. The population is first sorted by the non-dominated sorting method with special crowding distance [1] and saved in ascending order in P_sort. The first individual in P_sort is then chosen as the sub-population's seed; the Euclidean distances from the remaining population to the seed are computed, and individuals within distance R of the seed are placed in the same sub-population. Individuals assigned to sub-populations are then removed from the initial population. These steps are repeated until the entire population has been assigned to a sub-population. Lastly, the algorithm checks whether the seeds of all sub-populations are non-dominated individuals; if not, the sub-population whose seed is dominated is merged into the sub-population of the nearest non-dominated seed.
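The speciation procedure above can be sketched as follows. This is a simplified illustration (names are ours, not the paper's): it assumes the population is already sorted by non-dominated rank and special crowding distance, with `rank[i] == 0` marking a non-dominated individual, and the merge step is a plain nearest-seed assignment in the decision space.

```python
import numpy as np

def speciation(pop_sorted, R, rank):
    """Split a population (rows of pop_sorted, best first) into
    sub-populations of radius R, then merge any sub-population whose seed
    is dominated into the one with the nearest non-dominated seed."""
    remaining = list(range(len(pop_sorted)))
    species = []                      # entries: (seed_index, member_indices)
    while remaining:
        seed = remaining[0]           # best remaining individual becomes seed
        d = np.linalg.norm(pop_sorted[remaining] - pop_sorted[seed], axis=1)
        members = [remaining[k] for k in range(len(remaining)) if d[k] <= R]
        species.append((seed, members))
        remaining = [i for i in remaining if i not in members]
    nd_seeds = [s for s, _ in species if rank[s] == 0]
    merged = {}
    for seed, members in species:
        if rank[seed] == 0:
            merged.setdefault(seed, []).extend(members)
        else:                         # dominated seed: merge into nearest
            tgt = min(nd_seeds, key=lambda s: np.linalg.norm(
                pop_sorted[seed] - pop_sorted[s]))
            merged.setdefault(tgt, []).extend(members)
    return merged
```

Every returned sub-population is keyed by a non-dominated seed, which is the property the improved mechanism guarantees.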

Procedure of MMOCOA
The primary purpose of multimodal multi-objective optimization algorithms is to identify and retain several PSs in the decision space. To achieve this, the improved self-organizing mechanism is used to construct sub-populations and locate multiple PSs, while Non-dominated-SCD-sort is used to preserve them. By combining Non-dominated-SCD-sort with the self-organized speciation method, this strategy is well equipped to solve MMOPs. Here, Iguana signifies a random position in the decision space, and Iguana_j denotes the seed of the jth sub-population. The MMOCOA procedure is as follows. The whole population is initialized first, and its members are then ordered using Non-dominated-SCD-sort. After sorting, several sub-populations are created using the self-organizing speciation technique described in Algorithm 1. Next, the seed of the jth sub-population is selected as Iguana_j, and a random position in the decision space is chosen as Iguana. Finally, the population POP_j(t) is updated to POP_j(t+1) in accordance with equations (2), (3), (4) and equations (6), (7). This process repeats until the termination criterion is met.
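The overall loop can be summarized in the following highly simplified skeleton. The sort key and the move rule here are stand-ins (a scalarized sort instead of Non-dominated-SCD-sort, and a plain seed-chasing move instead of the full COA update of equations (2)-(6)), so this only illustrates the control flow, not the actual operators:

```python
import numpy as np

def mmocoa_skeleton(fitness, lb, ub, n_pop=100, iters=100, R=0.5, seed=0):
    """Illustrative MMOCOA control flow: sort, speciate, move per species."""
    rng = np.random.default_rng(seed)
    n = len(lb)
    X = lb + rng.random((n_pop, n)) * (ub - lb)
    for t in range(1, iters + 1):
        F = np.array([fitness(x) for x in X])
        order = np.argsort([f.sum() for f in F])   # stand-in for SCD sort
        X = X[order]
        # speciation: greedy radius-R grouping around the best remaining point
        remaining = list(range(n_pop))
        while remaining:
            s = remaining[0]
            d = np.linalg.norm(X[remaining] - X[s], axis=1)
            grp = [remaining[k] for k in range(len(remaining)) if d[k] <= R]
            iguana = X[s]                          # sub-population seed
            for i in grp[1:]:                      # members chase their seed
                r = rng.random(n)
                X[i] = np.clip(X[i] + r * (iguana - X[i]), lb, ub)
            remaining = [i for i in remaining if i not in grp]
    return X
```

In the real algorithm each sub-population's members are updated with the COA equations relative to their own Iguana_j, which is what allows different sub-populations to converge to different PSs in parallel.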

Test evaluation functions
Eleven test functions are used to evaluate MMOCOA's performance: MMF1-MMF2 [1], MMF3-MMF8 (suggested by Yue et al. [15]), SYM-PART simple and SYM-PART rotated (two more complicated benchmark functions), and the Omni-test function (n=3). MMF1, MMF2, MMF3, and MMF7 each have two PSs, while the remaining MMF test functions have four PSs. The Omni-test function (n=3) contains 27 PSs, and the SYM-PART problems have nine PSs.

Performance indicators
In this paper, we analyze the performance of MMOCOA using the Inverted Generational Distance in the decision space (IGDX) [18] and the Pareto Sets Proximity (PSP) [15]. IGDX is the average Euclidean distance in the decision space between a set of reference points on the true PS and the obtained PS. Let $S^*$ denote a set of reference points evenly spaced along the true PS and $K$ the set of non-dominated solutions in the final population. IGDX is the average distance from $S^*$ to $K$:

$$IGDX = \frac{\sum_{v \in S^*} \min d(v, K)}{|S^*|},$$

where $v$ is a point of the true PS in $S^*$ and $\min d(v, K)$ is the shortest Euclidean distance between $v$ and any point in $K$. The IGDX value reflects both the diversity and the convergence of solutions in the decision space: the lower the value, the closer the obtained PSs are to the reference points (true PS), and the better the algorithm's performance. The PSP indicator reflects the similarity between the obtained PSs and the true PSs:

$$PSP = \frac{CR}{IGDX},$$

where IGDX is the Inverted Generational Distance in the decision space and CR is the overlap ratio between the true PSs and the obtained PSs:

$$CR = \left( \prod_{l=1}^{n} \delta_l \right)^{\frac{1}{2n}}, \qquad \delta_l = \left( \frac{\min\left(V_l^{max}, v_l^{max}\right) - \max\left(V_l^{min}, v_l^{min}\right)}{V_l^{max} - V_l^{min}} \right)^2,$$

where $n$ is the dimension of the decision space, $v_l^{min}$ and $v_l^{max}$ are the minimum and maximum values of the $l$th dimension of the obtained PSs, and $V_l^{min}$ and $V_l^{max}$ are those of the true PSs.
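The two indicators are straightforward to compute. The sketch below follows the common form of these definitions (as in [15]); the clamping of negative overlaps and the guard for degenerate dimensions are our own additions to keep the code robust:

```python
import numpy as np

def igdx(ref_ps, obtained_ps):
    """IGDX: mean, over reference PS points, of the Euclidean distance to
    the nearest obtained solution, measured in the decision space."""
    ref = np.asarray(ref_ps)[:, None, :]          # (|S*|, 1, n)
    obt = np.asarray(obtained_ps)[None, :, :]     # (1, |K|, n)
    d = np.linalg.norm(ref - obt, axis=2)         # pairwise distances
    return d.min(axis=1).mean()

def psp(ref_ps, obtained_ps):
    """PSP = CR / IGDX, with CR the per-dimension overlap ratio between
    the bounding boxes of the true and obtained PSs."""
    ref, obt = np.asarray(ref_ps), np.asarray(obtained_ps)
    n = ref.shape[1]
    delta = []
    for l in range(n):
        vmin, vmax = ref[:, l].min(), ref[:, l].max()
        kmin, kmax = obt[:, l].min(), obt[:, l].max()
        overlap = min(vmax, kmax) - max(vmin, kmin)
        denom = vmax - vmin if vmax > vmin else 1.0   # guard: flat dimension
        delta.append(max(overlap / denom, 0.0) ** 2)  # clamp disjoint boxes
    cr = float(np.prod(delta)) ** (1.0 / (2 * n))
    return cr / igdx(ref_ps, obtained_ps)
```

If the obtained set reproduces the true PS exactly, IGDX is 0 and PSP diverges, so in practice PSP is only evaluated for strictly positive IGDX.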

Comparison with other algorithms
To demonstrate the performance of MMOCOA, this paper compares it with five state-of-the-art algorithms: MO_Ring_PSO_SCD [15], SSMOPSO [16], DN-NSGAII [1], NSGA-II [4], and Omni_Opt [19]. The parameters of the five comparison algorithms are identical to those listed in the original studies. Each of the 11 benchmark test functions is run independently 30 times, with the average value determining the final outcome. The number of iterations is set to 1000 and the population size to 100 for all algorithms. The experiments are carried out in MATLAB 2021 on a 64-bit Windows 11 computer with an Intel(R) Core(TM) i7-8750H processor clocked at 2.20 GHz and 8 GB of RAM.

The results of PSP and IGDX on algorithms
Tables 3 and 4 display the averages and standard deviations of the PSP and IGDX indicator values, respectively, produced by the six compared algorithms on each test function. Wilcoxon's rank-sum test at a significance level of 0.05 was used to detect notable differences between MMOCOA and the comparison algorithms over thirty independent runs on each test function. The symbols "+", "-", and "=" indicate that MMOCOA's results are markedly better than, worse than, and similar to those of the comparison algorithm, respectively.
On the 11 multimodal multi-objective test functions, the proposed MMOCOA performs much better overall than the other five multimodal multi-objective algorithms. As shown in Table 3, MMOCOA has the highest average PSP values on the MMF1, MMF4, MMF5, MMF6, and MMF7 test functions. MMOCOA obtains weaker average PSP values than SSMOPSO on the MMF2 test function, and weaker average PSP values than SSMOPSO and MO_Ring_PSO_SCD on the MMF8 and MMF3 test functions, yet still higher than DN-NSGAII, Omni_Opt, and NSGA-II. These outcomes show that the proposed MMOCOA works well on part of the benchmark functions, whereas SSMOPSO and MO_Ring_PSO_SCD perform well on the other part; however, these benchmark functions involve only 2 or 4 PSs. The number of PSs is relatively large for the remaining three benchmark functions: SYM-PART simple, SYM-PART rotated, and Omni-test. Compared with the other algorithms, MMOCOA's average PSP value is the best on the SYM-PART simple and SYM-PART rotated test functions. Although the average PSP value achieved by MMOCOA on the Omni-test function is lower than those of MO_Ring_PSO_SCD and SSMOPSO, it is still higher than the average PSP values of the other comparison algorithms.
According to the statistical data, the proposed MMOCOA earned the best average PSP values on seven of the multimodal multi-objective test functions. On the MMF1, MMF5, MMF6, SYM-PART simple, and SYM-PART rotated test functions, there was no significant difference between MMOCOA and SSMOPSO. However, the average value obtained by MMOCOA was much lower than that of SSMOPSO on the MMF2 and MMF3 test functions, and the average PSP value obtained by MMOCOA on the Omni-test function was weaker than those of SSMOPSO and MO_Ring_PSO_SCD. Overall, the findings show that MMOCOA performs well on most benchmark functions, despite weaker performance on some, such as MMF2 and Omni-test. Lower IGDX values [17] are preferable: SSMOPSO delivers the smallest IGDX values on MMF2, MMF3, and the Omni-test, MMOCOA achieves the smallest IGDX values on MMF1, MMF4-MMF7, and the SYM-PART simple test function, and MO_Ring_PSO_SCD has the lowest IGDX value on MMF8. The IGDX values of SSMOPSO and MMOCOA are similar on SYM-PART simple, but MMOCOA performs worse than SSMOPSO on MMF2, MMF3, and MMF8, and worse than MO_Ring_PSO_SCD on MMF8. Notably, MMOCOA outperforms NSGA-II, DN-NSGAII, and Omni_Opt in terms of IGDX values on all MMF test functions. Consequently, MMOCOA performed satisfactorily against the five other multimodal multi-objective algorithms on the MMF test functions; however, these benchmarks involve only 2 or 4 PSs. For the remaining three problems with a higher number of PSs, Table 4 shows that the IGDX values obtained by MMOCOA are weaker than those of SSMOPSO and MO_Ring_PSO_SCD on the Omni-test function, similar to those of SSMOPSO on the SYM-PART test functions, weaker than those of MO_Ring_PSO_SCD on the SYM-PART rotated test function, and better than those of DN-NSGAII, Omni_Opt, and NSGA-II on all three test functions. According to the Wilcoxon rank-sum test, the IGDX values obtained by MMOCOA are significantly superior to those obtained by NSGA-II, DN-NSGAII, Omni_Opt, SSMOPSO, and MO_Ring_PSO_SCD in 11, 11, 11, 7, and 7 of the 11 comparisons, respectively. Overall, the MMOCOA method produces excellent outcomes on the MMF problems but performs less well on problems with a large number of PSs, which suggests that MMOCOA is best suited to problems with fewer PSs.

Conclusion
To solve MMOPs, this paper proposes a self-organizing multimodal multi-objective Coati optimization algorithm (MMOCOA). MMOCOA employs a speciation technique to produce several stable niches (sub-populations), whose individuals advance in parallel toward multiple PSs. In addition, a novel self-organized method is introduced to increase the effectiveness and reliability of the original speciation strategy.
The experimental results reveal that MMOCOA outperforms the other algorithms in overall performance and in its capacity to retain more PSs in the decision space. However, MMOCOA performs relatively poorly on multimodal multi-objective benchmarks with irregular or numerous PSs, such as MMF3, MMF8, and the Omni-test. In future work, we will improve MMOCOA's performance on test functions with irregular PSs or a high number of PSs and apply it to real-world MMOPs.

 12 ,
Indicates variables in the objective space and m represents the dimensionality of the objective vector, decision space, n represents the dimensionality of the in decision space.

$r$ is a random number chosen from $[0, 1]$; $Iguana$ denotes the position of the iguana in the search space, which corresponds to the best position in the population, and $Iguana_j$ denotes the $j$th dimension of $Iguana$.
In Figure 2, triangles A and G represent non-dominated individuals in the initial population, whereas circle D represents a dominated individual. Triangle A is selected as the seed of the first sub-population because it is the best non-dominated individual in the initial population. Individuals within distance R of seed A are assigned to the same sub-population, so the first sub-population contains A, B, and C. G is the best non-dominated individual among the remaining individuals and is picked as the seed of the second sub-population; individuals H and I fall within radius R, so the second sub-population contains G, H, and I. Individual D is then the best of the remaining individuals, with E and F falling within radius R, so the third sub-population contains D, E, and F. Since D is not a non-dominated individual in the original population, and the closest non-dominated individual to D is A, the third sub-population is merged into the first. The seed of each sub-population is the best of the original population, guiding its members to better positions. As the sub-populations evolve, they are able to find several PSs in the decision space.

Figure 2: The fundamentals of the improved self-organized mechanism.

Table 1: Pseudo-code for the self-organized speciation (P stores the non-dominated individuals in POP).

After constructing the sub-populations, all individuals in each sub-population move toward their respective seeds. The process of implementing MMOCOA is outlined in Table 2, where POP represents the whole population, POP_k^j(t) reflects the value of the jth individual in the kth sub-population at the tth iteration, POP_j(t) denotes the jth sub-population at the tth iteration, and Iguana_j denotes the seed of the jth sub-population. v_l^min and v_l^max are the minimum and maximum values in the lth dimension of the obtained PSs, and V_l^min and V_l^max are the minimum and maximum values of the lth dimension of the true PSs, respectively; larger PSP values are desirable.

Table 3: Comparison of PSP values of MMOCOA and the five comparison algorithms

Table 4: Comparison of IGDX values of MMOCOA and the five comparison algorithms