Providing the best solution while respecting the limitations of the given problem is the main goal of optimization. An optimization problem can have several solutions; to compare these solutions and select the most appropriate one, the main criterion is the value of the objective function (Dhiman, 2021). Optimization is an important and critical activity in economics, industry, and many other fields. The methods proposed to solve optimization problems fall into deterministic and stochastic methods. Among deterministic methods, the gradient-based classes that use gradient information to find the global optimal solution are the mathematical programming methods, including nonlinear and linear programming (Faramarzi & Afshar, 2014), while nongradient-based classes use optimality conditions to find the global optimal solution (Lera & Sergeyev, 2018). One essential difficulty of mathematical programming approaches is the high probability of getting trapped in local optima while scanning a nonlinear search space. To overcome this pitfall, existing methods are modified or combined with other algorithms, but such remedies apply only to certain problems.
Among the difficulties of nongradient-based deterministic methods are the hardness of implementation and the need for a high level of mathematical knowledge to apply them (Doumari et al., 2021b).
Many optimization problems are too complex to be solved with classical deterministic computational methods (Cavazzuti, 2013). One of the most promising and important study areas in recent years has been the design of innovative stochastic methods called optimization algorithms, which offer a practical way to deal with such problems. Another reason for using optimization algorithms is that deterministic mathematical methods require prohibitive, sometimes impractical, amounts of time to solve optimization problems with many complex parameters (Dhiman & Kumar, 2017). Optimization methods draw on similarities to social, natural, and physical systems, as well as other processes that can be modeled as optimizers. The structure of these methods is derived from the optimization process in those systems, and they have produced good results on problems with complex structures (Dehghani, Hubálovský & Trojovský, 2021). In most of these methods, the search begins by producing a random population in the search area. Then, using the computational intelligence built into the algorithm, the candidate solutions are moved through the search space. This displacement is such that, after several iterations of the algorithm, the population converges toward the optimal point (Dehghani & Trojovský, 2021). The strategy for changing the status of population members and moving them through the search space is the most important difference among optimization algorithms. In recent years, the development and use of optimization algorithms has grown significantly.
The global optimum is the best solution to an optimization problem. The most important challenge for optimization algorithms is that, due to the randomness of the search process, their proposed solutions are not necessarily identical to the global optimum. Therefore, the solution an optimization algorithm proposes for a given problem is a quasi-optimal solution that is at best equal to the global optimum (Dehghani et al., 2020c). It follows that a quasi-optimal solution closer to the global optimum is a more appropriate solution, which has led researchers to develop numerous optimization algorithms that provide better quasi-optimal solutions.
Another important issue in optimization studies is accepting the fact that no single algorithm performs best on all optimization problems. According to the no free lunch (NFL) theorem (Wolpert & Macready, 1997), if an algorithm is highly capable of solving one or more optimization problems, there is no assurance that it can solve other problems as well. The NFL theorem therefore encourages scholars to design new algorithms for solving optimization problems in different applications.
In this paper, a new stochastic optimizer called the Average and Subtraction-Based Optimizer (ASBO) is designed for solving various optimization problems. The scientific contributions of this research can be listed as follows:
ASBO is designed based on the idea of using the average of, and the subtraction between, the best and worst population members to guide the population toward the optimal solution.
The various steps of ASBO are expressed, and then the concepts expressed in these steps are mathematically modeled.
Twenty-three standard benchmark functions, comprising seven unimodal functions, six high-dimensional multimodal functions, and ten fixed-dimensional multimodal functions, have been employed for ASBO evaluation.
The optimization results obtained from the implementation of ASBO in optimizing these objective functions are analyzed against the performance of nine well-known algorithms.
The findings and simulation results indicate the capability of the proposed algorithm to effectively solve problems and its superiority over nine compared algorithms.
The rest of the paper is organized as follows: the second section presents a literature review; the problem definition and formulation are presented in the third section; the proposed ASBO is introduced and modeled in the fourth section; ASBO simulation and optimization results are studied in the fifth section; the sixth section discusses the results; and finally, the seventh section provides conclusions and several perspectives for ASBO.
Stochastic optimization algorithms have shown an acceptable ability to solve optimization problems effectively by providing appropriate solutions. These algorithms can be applied to various problems such as cloud computing (Prakash & Bala, 2014a, 2014b; Prakash, Bawa & Garg, 2021), cross-platform applications (Vassallo et al., 2019), engineering (Dehghani et al., 2020b), energy commitment (Dehghani, Montazeri & Malik, 2019) and other optimization challenges across scientific fields. With respect to the central idea in their design, stochastic optimization algorithms can be divided into four types, namely evolutionary-based, swarm-based, physics-based, and game-based algorithms.
Evolutionary-based techniques are developed by simulating evolutionary theory and concepts from the biological sciences. One of the most famous and oldest evolutionary algorithms, which uses mechanisms from evolutionary biology such as inheritance and mutation, is the Genetic Algorithm (GA). GA is a programming method that applies genetic evolution and Darwin's theory of evolution by natural selection as a problem-solving technique. Organisms that are more capable of performing activities in their environment have a higher reproduction rate, and, naturally, organisms that are less compatible with the environment have a lower one. After several generations, the population tends to contain more organisms whose chromosomes are more compatible with the environment. Over time, the composition of individuals in the population changes due to natural selection, and this is a sign of population evolution (Goldberg & Holland, 1988). Evolution Strategy (ES) (Beyer & Schwefel, 2002), Genetic Programming (GP) (Banzhaf et al., 1998), Biogeography-based Optimizer (BBO) (Simon, 2008), and Differential Evolution (DE) (Storn & Price, 1997) are some other evolutionary-based algorithms.
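The selection, crossover, and mutation cycle just described can be sketched in a few lines of Python. This is an illustrative minimal real-coded GA under my own assumptions, not the exact operators of any cited work: tournament selection is used here in place of roulette-wheel selection for simplicity, and the function name and parameter defaults are hypothetical.

```python
import random

def genetic_algorithm(objective, bounds, pop_size=30, generations=100,
                      crossover_rate=0.8, mutation_rate=0.05, seed=0):
    """Minimal real-coded GA sketch: fitter individuals reproduce more often."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for lb, ub in bounds] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(generations):
        # Tournament selection: the fitter (lower-objective) of two random parents wins.
        def pick():
            a, b = rng.sample(pop, 2)
            return a if objective(a) < objective(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < crossover_rate:  # whole-arithmetic crossover
                wgt = rng.random()
                child = [wgt * x + (1 - wgt) * y for x, y in zip(p1, p2)]
            else:
                child = p1[:]
            for d, (lb, ub) in enumerate(bounds):  # Gaussian mutation, clamped to bounds
                if rng.random() < mutation_rate:
                    child[d] = min(ub, max(lb, child[d] + rng.gauss(0.0, 0.1 * (ub - lb))))
            children.append(child)
        pop = children
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = cand[:]
    return best
```

On a simple test function such as the 2-D sphere, this sketch steadily drives the best-so-far individual toward the minimum.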
Swarm-based techniques mimic various natural phenomena: the swarming behavior of insects, animals, birds, and other living things. Ant Colony Optimization (ACO) is a nature-based optimizer that imitates the behavior of ants. ACO uses simple agents called ants to find suitable solutions to optimization problems in an iteration-based process. Ants can find the shortest route from a food source to the nest using pheromone information. Ants deposit pheromones on the ground while walking and follow a path by smelling the pheromone spilled on the ground. If they come to a crossroads on the way to the nest, they choose a path randomly, since they have no information about which way is better. On average, half of the ants are expected to choose the first path and the other half the second path. Because one path is shorter than the other, more ants traverse it per unit time, and more pheromone accumulates on it. After a short time, the amount of pheromone on the two paths reaches a level that influences the decisions of new ants toward the better path. From then on, newer ants are more likely to prefer the shorter path, because they sense more pheromone on it at the decision point, and soon almost all ants choose this path (Dorigo, Maniezzo & Colorni, 1996). The idea behind the swarming movement of fish and birds led to the design of the famous Particle Swarm Optimization (PSO). Every population member in PSO, considered a particle, is a candidate solution to the problem. These particles move in the search space according to two main concepts: the experience of each particle as individual knowledge, and the experience of the whole population as collective knowledge. As a result of this strategy, particles tend toward the optimal areas of the search space and are able to provide an optimal solution to the given problem (Kennedy & Eberhart, 1995).
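The PSO velocity update, blending individual knowledge (each particle's own best) with collective knowledge (the swarm's best), can be sketched as follows. The function name and parameter values are illustrative choices, not those of any cited work.

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch: each particle blends its own best position
    (cognitive term) and the swarm's best (social term) into its velocity."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lb, ub) for lb, ub in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm best
    for _ in range(iters):
        for i in range(n_particles):
            for d, (lb, ub) in enumerate(bounds):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # individual knowledge
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # collective knowledge
                pos[i][d] = min(ub, max(lb, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```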
The special potential of the educational environment of a classroom led to the design of Teaching-Learning Based Optimization (TLBO). Simulation of teacher-learner interactions in the two phases of teaching and learning is the main inspiration of TLBO. In the teaching phase, the best member of the population is assigned as the teacher, and the other members of the population are trained by the teacher as students in the class. In the second phase, called the learner phase, students try to improve each other's situation by sharing information (Rao, Savsani & Vakharia, 2011). Simulating animal behavior and strategy can also motivate metaheuristic design. In this regard, the behavior of gray wolves, whose leadership hierarchy is modeled by four different types of wolves, has been used in the design of the Gray Wolf Optimizer (GWO). The alpha is the strongest wolf in the pack; beta and delta are the second and third strongest wolves, respectively; and the omega type includes the remaining wolves of the pack. The natural behavior of these wolves during hunting is modeled in three phases: searching for prey, encircling prey, and finally attacking prey (Mirjalili, Mirjalili & Lewis, 2014). The feeding and hunting method of humpback whales, known as bubble-net hunting, is applied in the design of the swarm-based Whale Optimization Algorithm (WOA). In this method of hunting, a whale releases air bubbles under the sea and creates walls of rising air in the water. The krill and fish inside the bubble wall move toward the center of the bubble circle out of fear, and the whale is then able to swallow a large number of them by opening its mouth. Humpback whales are able to detect the position of prey and surround them. Because the optimal position in the search space is unknown in advance, the WOA assumes that the best current solution is the target prey or a point near it (Mirjalili & Lewis, 2016).
Imitation of the behavior of marine predators in the oceans, which are able to find and trap prey, was the impetus for the development of the Marine Predators Algorithm (MPA). In general, most animals in the wild effectively use a random walk strategy to find food. A random walk is a stochastic process in which the next position depends on the current position and on a mathematically modeled probability of moving to the next location. One of the most popular random walk classes is the Lévy flight, which is used in the design of the MPA to model the movement strategy of marine predators in trapping prey (Faramarzi et al., 2020).
Another bioinspired optimization algorithm is the Tunicate Swarm Algorithm (TSA), which was proposed based on modeling the jet propulsion and swarm actions of tunicates during navigation and foraging. An important characteristic of tunicates is their ability to find food sources at sea, which is analogous to reaching the optimal solution in the search space of an optimization problem. When finding food sources, tunicate behavior is modeled based on three main conditions, namely (i) avoiding conflicts between tunicates, (ii) moving toward the position of the best tunicate, and (iii) remaining close to the best tunicate, which have been used in the design of TSA (Kaur et al., 2020). Some other swarm-based algorithms include Spotted Hyena Optimizer (SHO) (Dhiman & Kumar, 2017), Artificial Ecosystem-based Optimization (AEO) (Zhao, Wang & Zhang, 2020), Cat- and Mouse-based Optimization (CMBO) (Dehghani, Hubálovský & Trojovský, 2021), Artificial Gorilla Troops Optimizer (AGTO) (Abdollahzadeh, Soleimanian Gharehchopogh & Mirjalili, 2021), Horse Herd Optimization Algorithm (HOA) (MiarNaeimi, Azizyan & Rashki, 2021), Aquila Optimizer (AO) (Abualigah et al., 2021), Golden Eagle Optimizer (GEO) (Mohammadi-Balani et al., 2021) and Mutated Leader Algorithm (MLA) (Zeidabadi et al., 2022).
Physics-based techniques have been developed through mathematical modeling of physical phenomena and laws. The Gravitational Search Algorithm (GSA) is a physics-based optimizer that simulates Newton's law of gravitation and the laws of motion on a population of masses. In GSA, each mass represents a solution to the problem. These masses exert force on each other according to the law of gravity, in proportion to their masses and distance from each other. Then, based on the modeling of the laws of motion, this population of masses moves toward the optimal areas of the search space (Rashedi, Nezamabadi-Pour & Saryazdi, 2009). The Momentum Search Algorithm (MSA) is a physics-based algorithm that uses momentum and Newtonian laws of motion to design a stochastic optimizer. In MSA, population members are bullets placed in the search space that move, according to Newton's laws of motion, under the momentum applied to them. Given that the momentum applied to the bullets is in the direction of the best solution, after a certain number of iterations the bullets converge toward the optimal solution (Dehghani & Samet, 2020). The Spring Search Algorithm (SSA) is based on mathematical modeling of Hooke's law in a system of springs and weights. In SSA, the search agents are weights that apply elastic force to each other through the springs connecting them. A weight with a better status in the search space pulls the other weights toward a better position via springs with larger spring constants. In an iterative process, the weights are expected to converge toward the optimal solution (Dehghani et al., 2021; Dehghani et al., 2020e).
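The gravitational pull idea behind GSA can be sketched as a single force/acceleration step. This is an illustrative simplification, not the full algorithm (it omits GSA's decaying gravitational constant, random weighting, and velocity update); the function name and the mass normalization are my own assumptions.

```python
import math

def gsa_step(positions, fitness, G=100.0, eps=1e-9):
    """One illustrative gravitational step: masses derive from fitness
    (better solutions are heavier) and each mass pulls all the others."""
    n, dim = len(positions), len(positions[0])
    best, worst = min(fitness), max(fitness)
    # Normalized masses: the best solution receives the largest mass.
    m = [(worst - f) / (worst - best + eps) for f in fitness]
    total = sum(m) + eps
    M = [mi / total for mi in m]
    acc = [[0.0] * dim for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            R = math.dist(positions[i], positions[j])
            for d in range(dim):
                # Acceleration of mass i toward mass j is proportional to M[j]
                # (the mass of i cancels out when converting force to acceleration).
                acc[i][d] += G * M[j] * (positions[j][d] - positions[i][d]) / (R + eps)
    return acc
```

Lighter (worse) masses are accelerated toward heavier (better) ones, which is the mechanism that moves the population toward optimal areas.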
Some of the other physics-based algorithms are: Flow Direction Algorithm (FDA) (Karami et al., 2021), Simulated Annealing (SA) (Fogel, Owens & Walsh, 1966), Electromagnetic Field Optimization (EFO) (Abedinpourshotorban et al., 2016), Lichtenberg Algorithm (LA) (Pereira et al., 2021), and Archimedes Optimization Algorithm (AOA) (Hashim et al., 2021).
Game-based techniques are developed by simulating player behavior and the rules of different games. Ring Toss Game-Based Optimization (RTGBO) is a game-inspired method proposed based on simulating ring throwing and the scoring rules of the ring toss game. In RTGBO, the search agents are the rings, which are thrown toward score bars located in optimal areas. Over the iterations of the algorithm, the rings converge toward the optimal solution (Doumari et al., 2021a). Hide Object Game Optimization (HOGO) models players' behavior in finding an object hidden in the game space. From the algorithm's point of view, the hidden object is the optimal solution that the players, as the algorithm population, try to find. The transfer of information between players leads the algorithm closer to the optimal solution (Dehghani et al., 2020g). Darts Game Optimization (DGO) (Dehghani et al., 2020f), Tug of War Optimization (TWO) (Kaveh & Zolghadr, 2016), Football Game Based Optimizer (FGBO) (Dehghani et al., 2020a), and Volleyball Premier League (VPL) (Moghdani & Salimifard, 2018) are some of the other game-based algorithms.
Problem Definition and Formulation
An optimization problem is one that has more than one feasible solution. A feasible solution is a solution that satisfies the constraints of the problem. The process of selecting the best among these feasible solutions is called optimization (Dehghani et al., 2020d). The criterion for selecting the best solution is the objective function value. In terms of constraints, optimization problems are divided into the following two categories:
(A) Unconstrained optimization problems: The main goal in these problems is to minimize or maximize the objective function without any restrictions on the decision variables.
(B) Constrained optimization problems: In most practical problems, optimization is done according to some constraints. These constraints may exist in the behavior and performance of the system as well as in the physics and geometry of the problem.
Equations representing the constraints may be equality or inequality constraints, and in each case the optimization method differs. In either case, the constraints determine the feasible region of the design (Han et al., 2021).
An optimization problem is introduced from a general point of view using three sections: constraints, objective functions, and decision variables (Dhiman et al., 2020). An optimization problem can be modeled mathematically according to Eqs. (1)–(4).
\[
\text{minimize } F(X), \quad X = (x_1, x_2, \ldots, x_m), \tag{1}
\]
\[
\text{subject to: } h_k(X) = 0, \quad k = 1, 2, \ldots, q, \tag{2}
\]
\[
g_j(X) \le 0, \quad j = 1, 2, \ldots, p, \tag{3}
\]
\[
lb_n \le x_n \le ub_n, \quad n = 1, 2, \ldots, m, \tag{4}
\]

where F(X) is the objective function, h_k(X) is the kth equality constraint, q is the number of equality constraints, g_j(X) is the jth inequality constraint, p is the number of inequality constraints, x_n is the nth problem variable, lb_n and ub_n are the lower and upper bounds of the nth problem variable, and m is the number of problem variables.
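One common way to hand a constrained problem of this form to a stochastic optimizer is to fold the equality constraints h_k(X) = 0 and inequality constraints (assuming the usual convention g_j(X) <= 0) into a single penalized objective. The helper name and the toy problem below are hypothetical illustrations.

```python
def make_penalized_objective(f, eq_constraints, ineq_constraints, rho=1e6):
    """Fold equality (h_k(X) = 0) and inequality (g_j(X) <= 0) constraints
    into one penalized objective suitable for an unconstrained optimizer."""
    def penalized(x):
        penalty = sum(h(x) ** 2 for h in eq_constraints)          # violation of h_k = 0
        penalty += sum(max(0.0, g(x)) ** 2 for g in ineq_constraints)  # violation of g_j <= 0
        return f(x) + rho * penalty
    return penalized

# Hypothetical example: minimize x0^2 + x1^2 subject to x0 + x1 = 1.
obj = make_penalized_objective(
    lambda x: x[0] ** 2 + x[1] ** 2,
    eq_constraints=[lambda x: x[0] + x[1] - 1.0],
    ineq_constraints=[],
)
```

A feasible point such as (0.5, 0.5) then scores far better than an infeasible one such as (0, 0), so the penalty steers the search back into the feasible region.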
The next step in the optimization process, after modeling, is to solve the problem effectively, which can be done using optimization methods. Optimization algorithms are effective and efficient stochastic techniques that are able to provide appropriate solutions to optimization problems. In the next section, the ASBO method is introduced and designed.
Average and Subtraction-Based Optimizer
This section introduces the proposed ASBO first and then mathematically models ASBO.
Each optimization problem has a feasible space of problem solutions called the search space. The search space can be visualized as a coordinate system with as many axes as the problem has decision variables. Population members move through this search space, aiming to reach an appropriate quasi-optimal solution. The values of the problem decision variables are determined by the positions of the ASBO members in the search space. Each member of the population provides the other members with information about the region in which it finds itself. In ASBO, members of the population move to the optimal regions in an iteration-based process. The main idea in designing the proposed ASBO is to update the positions of the population members based on the average, and the subtraction, of the worst and best members of the population. After the full run of ASBO on an optimization problem, ASBO introduces the best solution obtained during the run as the solution to the problem. The various ASBO steps are listed below:
Step 1: Specify the optimization problem and its information.
Step 2: Specify the parameters of the algorithm.
Step 3: Initial positioning of algorithm population members in the search space.
Step 4: Evaluate all members of the population.
Step 5: Determine the best and worst members of the population.
Step 6: Calculate the average and subtraction of the best and worst members of the population.
Step 7: Update ASBO's population based on the average and subtraction of the best and worst population members.
Step 8: Repeat steps 4 to 7 until the stop condition is reached.
Step 9: The best obtained quasi-optimal solution for the optimization problem is presented.
In ASBO, each population member is a feasible solution to the optimization problem. In fact, each ASBO member is mathematically a vector with the number of elements equal to the number of decision variables, while each element of this vector specifies the value of the variable corresponding to that element. The population members of ASBO are modeled according to Eq. (5).
\[
X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}
  = \begin{bmatrix}
      x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\
      \vdots  &        & \vdots  &        & \vdots  \\
      x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\
      \vdots  &        & \vdots  &        & \vdots  \\
      x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m}
    \end{bmatrix}_{N \times m}, \tag{5}
\]

where X is the population matrix of ASBO candidate solutions, X_i is the ith candidate solution of ASBO, m is the number of decision variables of the given problem, N is the number of ASBO members, and x_{i,d} is the value of the dth decision variable determined by the ith candidate solution.
Each ASBO searcher member is a potential solution to the given problem. By placing each of these solutions in the decision variables of the problem formula, the objective function is evaluated. This results in a value corresponding to each ASBO member for the objective function. The set of these values are modeled using a vector according to Eq. (6).
\[
F = \begin{bmatrix} F_1 & \cdots & F_i & \cdots & F_N \end{bmatrix}^{T}, \qquad F_i = F(X_i), \tag{6}
\]

where F_i represents the value of the objective function corresponding to the ith member, and F denotes the set of these values together as the objective function vector. Comparing the values obtained for the objective function is the main criterion for determining the quality of solutions and for identifying the worst and best ASBO members.
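The population matrix of Eq. (5) and the objective function vector of Eq. (6) can be sketched directly; the function names are illustrative, and per-variable box bounds are assumed.

```python
import random

def init_population(N, lb, ub, seed=0):
    """Random population matrix X (Eq. (5)): N members, m = len(lb) variables,
    each variable drawn uniformly from its [lb[d], ub[d]] interval."""
    rng = random.Random(seed)
    return [[rng.uniform(lb[d], ub[d]) for d in range(len(lb))] for _ in range(N)]

def evaluate(X, objective):
    """Objective function vector F (Eq. (6)): one value per candidate solution."""
    return [objective(x) for x in X]
```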
ASBO employs three different phases in the process of updating the algorithm population with the aim of improving candidate solutions.
In the first phase of ASBO, a member composed of the average of the best and worst members of the population is tasked with updating the ASBO population. This phase of ASBO is simulated based on Eqs. (7)–(9).
\[
M = \frac{X_b + X_w}{2}, \tag{7}
\]
\[
x_{i,d}^{new,P1} =
\begin{cases}
x_{i,d} + r\,(M_d - I\,x_{i,d}), & F_M < F_i,\\[2pt]
x_{i,d} + r\,(x_{i,d} - M_d), & \text{otherwise},
\end{cases} \tag{8}
\]
\[
X_i =
\begin{cases}
X_i^{new,P1}, & F_i^{new,P1} < F_i,\\[2pt]
X_i, & \text{otherwise},
\end{cases} \tag{9}
\]

where M is the average of the worst and best population members, F_M is its objective function value, M_d is the dth dimension of M, X_b is the best member of ASBO, X_w is the worst member of ASBO, X_i^{new,P1} is the new status of the ith population member based on phase 1, F_i^{new,P1} is its objective function value, x_{i,d}^{new,P1} is the dth dimension of X_i^{new,P1}, I is a random number equal to 1 or 2, and r is a random number in the interval [0, 1].
In the second phase, the positions of the population members are updated based on the subtraction information of the best and worst population members. The concepts of the second phase of ASBO are simulated using Eqs. (10)–(12).
\[
D = X_b - X_w, \tag{10}
\]
\[
x_{i,d}^{new,P2} = x_{i,d} + r\,D_d, \tag{11}
\]
\[
X_i =
\begin{cases}
X_i^{new,P2}, & F_i^{new,P2} < F_i,\\[2pt]
X_i, & \text{otherwise},
\end{cases} \tag{12}
\]

where D is the subtraction of the worst and best members of ASBO, D_d is its dth dimension, X_i^{new,P2} is the new proposed value of the ith candidate solution based on phase 2, F_i^{new,P2} is its objective function value, and x_{i,d}^{new,P2} is the dth dimension of X_i^{new,P2}.
In the third phase, the position of each population member is updated by moving it toward the best member of the population. This phase of ASBO is simulated using Eqs. (13) and (14).

\[
x_{i,d}^{new,P3} = x_{i,d} + r\,(x_{b,d} - I\,x_{i,d}), \tag{13}
\]
\[
X_i =
\begin{cases}
X_i^{new,P3}, & F_i^{new,P3} < F_i,\\[2pt]
X_i, & \text{otherwise},
\end{cases} \tag{14}
\]

where X_i^{new,P3} is the new status of the ith population member based on phase 3, F_i^{new,P3} is its objective function value, x_{i,d}^{new,P3} is the dth dimension of X_i^{new,P3}, and x_{b,d} is the dth dimension of the best member X_b.
After implementing the described three phases of the proposed ASBO, each population member is placed in a new position in the search space. The new status of ASBO members means new candidate values for decision variables, leading to the evaluation of new values for the objective function. Based on the new values, the algorithm enters the next iteration, and the algorithm steps are repeated according to Eqs. (7)–(14) until the implementation of the algorithm is completed. After the complete implementation of ASBO, the best obtained solution during the iterations of the algorithm is introduced as the solution to the problem. The various steps of ASBO are presented as pseudocode in Scheme 1, and as flowcharts in Fig. 1.
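As a concrete illustration, the three-phase loop described above can be sketched in Python. This is a sketch under stated assumptions, not the authors' reference implementation: where the verbal description leaves the update formulas implicit, the code assumes a move relative to the average M of the best and worst members (toward M when it is better than the current member, away otherwise), a move along the best-minus-worst direction D, a final move toward the best member, and greedy acceptance of any move that improves the objective. The function name `asbo` and all parameter defaults are illustrative.

```python
import random

def asbo(objective, lb, ub, N=30, T=200, seed=0):
    """Sketch of the three-phase ASBO loop (update formulas assumed, see lead-in)."""
    rng = random.Random(seed)
    m = len(lb)
    X = [[rng.uniform(lb[d], ub[d]) for d in range(m)] for _ in range(N)]
    F = [objective(x) for x in X]

    def clamp(x):  # keep candidates inside the search space
        return [min(ub[d], max(lb[d], x[d])) for d in range(m)]

    def greedy(i, cand):  # accept a candidate only if it improves member i
        fc = objective(cand)
        if fc < F[i]:
            X[i], F[i] = cand, fc

    for _ in range(T):
        b = min(range(N), key=lambda i: F[i])     # index of best member Xb
        w = max(range(N), key=lambda i: F[i])     # index of worst member Xw
        Xb, Xw = X[b][:], X[w][:]
        M = [(Xb[d] + Xw[d]) / 2.0 for d in range(m)]  # average member (phase 1)
        FM = objective(M)
        D = [Xb[d] - Xw[d] for d in range(m)]          # subtraction member (phase 2)
        for i in range(N):
            # Phase 1: move relative to M (toward it if M is better, else away).
            r, I = rng.random(), rng.choice((1, 2))
            if FM < F[i]:
                cand = [X[i][d] + r * (M[d] - I * X[i][d]) for d in range(m)]
            else:
                cand = [X[i][d] + r * (X[i][d] - M[d]) for d in range(m)]
            greedy(i, clamp(cand))
            # Phase 2: move along the best-minus-worst direction D.
            r = rng.random()
            greedy(i, clamp([X[i][d] + r * D[d] for d in range(m)]))
            # Phase 3: move toward the best member.
            r, I = rng.random(), rng.choice((1, 2))
            greedy(i, clamp([X[i][d] + r * (Xb[d] - I * X[i][d]) for d in range(m)]))
    b = min(range(N), key=lambda i: F[i])
    return X[b], F[b]
```

Because every move is greedily accepted, the objective value of each member decreases monotonically, and on a simple test function the best member converges quickly.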
In the proposed algorithm, the algorithm population is updated in three different phases. The first phase (which uses average information) and the second phase (which uses subtraction information) move the ASBO members in different areas of the search space and discover new areas. This update process increases the search power and exploration index in the proposed algorithm.
In the third phase, the position of the best member of ASBO is employed to guide the searcher members in the search space. After the algorithm identifies the optimal region based on its exploratory power, moving toward the best member causes the population members to converge toward the optimal solution. During the iteration of ASBO, as the amount of displacement of population members in the first and second phases decreases (because the worst and best members of the population approach each other), the algorithm moves to the best member of the population in smaller steps. This convergence process toward the optimal solution demonstrates the exploitation power of the proposed ASBO in achieving the appropriate solution to the optimization problem.
This section presents the ASBO experimental study on the effective solution of optimization problems and the quality analysis of the optimization results versus the global optima. In this regard, seven unimodal test functions, six high-dimensional multimodal functions, and ten fixed-dimensional multimodal functions have been employed for ASBO evaluation. Detailed information for these objective functions is specified in Tables 1 to 3 in Appendix A. Stochastic optimization algorithms succeed at optimization challenges when they have an acceptable power in the global search of the problem-solving space, to accurately scan different areas and identify the optimal area, as well as an appropriate power in the local search, to converge to the global optimum. As a result, a successful optimization process occurs when the optimization algorithm strikes the right balance between global search and local search. The reason for choosing unimodal functions (the seven test functions F1 to F7) is that these types of problems, with only one main peak in the search space, are very valuable choices for evaluating the local search of optimization algorithms. The main purpose of optimizing these problems is to analyze the ability of optimization algorithms to converge to the global optima. The choice of high-dimensional multimodal functions (F8 to F13) is due to the fact that these types of functions, with multiple local optimal areas in the search space, challenge the global search ability of optimization algorithms. The main purpose of optimizing these types of problems is to evaluate the ability of optimization algorithms to cross non-optimal areas and thus identify the main optimal area.
The reason for choosing fixed-dimensional multimodal functions (F14 to F23) is that in this type of problem, identifying the optimal region and converging to the optimal solution are both important at once, which makes them suitable for analyzing the global search and local search capabilities of optimization algorithms simultaneously. This type of problem evaluates the ability of optimization algorithms to strike the right balance between global search and local search. Additionally, to further analyze the quality of the proposed ASBO, the optimization results obtained are compared with the performance of nine well-known algorithms, namely SHO, PSO, TLBO, GA, WOA, TSA, GWO, GSA, and MPA. These nine methods were selected from the numerous optimization algorithms designed so far for the following reasons: GA and PSO are the best known and most widely used optimization algorithms; GSA, TLBO, and GWO, introduced between 2009 and 2014, have been popular with researchers and widely cited; WOA and SHO are among the most widely used techniques introduced in 2016 and 2017; and MPA and TSA are recently developed optimizers that have quickly gained the attention of scientists and have been used in a variety of real-world applications. In presenting the optimization results, the criterion ave denotes the mean of the obtained solutions and the criterion std denotes their standard deviation. These two criteria are calculated using Eqs. (15) and (16).
\[
ave = \frac{1}{N_r} \sum_{i=1}^{N_r} BQS_i, \tag{15}
\]
\[
std = \sqrt{\frac{1}{N_r} \sum_{i=1}^{N_r} \left(BQS_i - ave\right)^2}, \tag{16}
\]

where N_r is the number of independent runs and BQS_i is the best quasi-optimal solution obtained in the ith run.
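These two criteria can be computed directly from the per-run results; a minimal helper (name illustrative):

```python
import math

def ave_std(bqs):
    """ave and std criteria of Eqs. (15) and (16) over Nr independent runs."""
    nr = len(bqs)
    ave = sum(bqs) / nr
    std = math.sqrt(sum((b - ave) ** 2 for b in bqs) / nr)
    return ave, std
```

Note that Eq. (16) divides by N_r (population standard deviation), not N_r - 1.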
Table 4 specifies the values set for the parameters of the compared algorithms.
|Algorithm|Parameter|Value|
|---|---|---|
|GA|Selection|Roulette wheel (proportionate)|
|GA|Crossover|Whole arithmetic (probability = 0.8)|
|GA|Mutation|Gaussian (probability = 0.05)|
|PSO|Cognitive and social constants|(C1, C2) = (2, 2)|
|PSO|Inertia weight|Linear reduction from 0.9 to 0.1|
|PSO|Velocity limit|10% of dimension range|
|GSA|Alpha, G0, Rnorm, Rpower|20, 100, 2, 1|
|TLBO|Teaching factor (TF)|TF = round[1 + rand]|
|TLBO|Random number (rand)|Random number in [0, 1]|
|GWO|Convergence parameter (a)|Linear reduction from 2 to 0|
|WOA|Convergence parameter (a)|Linear reduction from 2 to 0|
|WOA|Random vector (r)|Random vector in [0, 1]|
|WOA|Random number (l)|Random number in [-1, 1]|
|TSA|Pmin, Pmax|1, 4|
|TSA|c1, c2, c3|Random numbers in [0, 1]|
|SHO|Control parameter (h)|Linear reduction from 5 to 0|
|MPA|Constant number (P)|P = 0.5|
|MPA|Random vector (R)|Vector of uniform random numbers in [0, 1]|
|MPA|Fish Aggregating Devices (FADs)|FADs = 0.2|
|MPA|Binary vector (U)|U = 0 or 1|
The first group of functions selected to evaluate the efficiency of optimization algorithms in achieving suitable solutions is of the unimodal type. The optimization results for the F1 to F7 unimodal test functions, using the nine compared algorithms and the proposed ASBO, are presented in Table 5.
Based on the results presented in this table, ASBO attains the global optimum for the F1 and F6 functions. In addition, ASBO is the best optimizer for F2 to F4 and F7. Relying on the simulation results, it can be stated that ASBO provides results that are superior and closer to the global optimum, giving it a clear advantage over the nine compared algorithms.
The second group of functions selected to evaluate the performance of optimization algorithms includes six high-dimensional multimodal functions. The ability of the optimization algorithms to provide solutions for F8 to F13 is presented in Table 6.
The optimization results show that ASBO can attain the global optimum for F9 and F11. ASBO is the best optimizer for the F10, F12, and F13 functions. In the F8 optimization challenge, GA, TLBO, PSO, and ASBO rank first to fourth, respectively. Analysis of the optimization results obtained for the F8 to F13 functions shows that ASBO has a higher ability than the nine compared algorithms.
The third group of functions employed in this research to test the ability of optimization algorithms includes ten fixed-dimensional multimodal functions. The results of implementing ASBO and the nine compared algorithms on these functions are reported in Table 7.
The experimental results presented in this table show that ASBO, at its best performance, has discovered the global optima of F14 and F15. ASBO ranks as the best optimizer for F16, F19, and F20 in competition with the nine compared algorithms. ASBO is also the top optimizer for F17, F18, F21, F22, and F23 due to its smaller std index, the ave index being similar across algorithms. Analysis of the optimization results for the F14 to F23 functions indicates that the proposed ASBO is better at providing suitable solutions than the compared algorithms.
The performance of ASBO and the nine compared algorithms is presented as boxplots in Fig. 2.
The proposed ASBO algorithm carries out its optimization through the scanning power of its searcher members in an iteration-based procedure. Therefore, any change in the number of ASBO population members (N) or the number of ASBO iterations (T) affects the output of the algorithm. This motivates a sensitivity analysis of ASBO with respect to the two parameters N and T. Accordingly, a sensitivity analysis evaluating the performance of ASBO under the influence of these two parameters is presented.
To analyze the sensitivity of ASBO to the parameter N, the algorithm is applied with four different population sizes, N = 20, 30, 50, and 80, to the F1 to F23 test functions. Table 8 shows the results of the sensitivity analysis of ASBO to N. The behavior of the ASBO convergence curves under this analysis is shown in Fig. 3.
The simulation results show that increasing the number of ASBO members causes the search space to be scanned more accurately and the values of the objective function to be reduced through better quasi-optimal solutions.
In the second sensitivity study, the performance of ASBO under changes in the parameter T is investigated. In this experiment, ASBO is employed with T values of 100, 500, 800, and 1,000 in solving F1 to F23. Table 9 presents the results of the sensitivity analysis of ASBO to T. The behavior of the ASBO convergence curves on the objective functions under changes in the maximum number of iterations is plotted in Fig. 4.
The simulation results make clear that increasing the parameter T improves the convergence of ASBO toward the global optimum and thus reduces the values of the objective function.
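The two sensitivity studies follow the same experimental pattern, which can be reproduced in outline as below. The optimizer here is a toy random search standing in for ASBO (whose actual update equations appear earlier in the paper), so only the structure of the experiment, not the numbers, corresponds to Tables 8 and 9.

```python
import random

def sphere(x):
    # F1 sphere function: unimodal, global optimum 0 at the origin.
    return sum(v * v for v in x)

def toy_optimizer(n_pop, max_iter, dim=5, seed=0):
    """Stand-in random population search (NOT the real ASBO update):
    returns the best sphere value over n_pop * max_iter samples."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(max_iter):
        for _ in range(n_pop):
            x = [rng.uniform(-100, 100) for _ in range(dim)]
            best = min(best, sphere(x))
    return best

# Sensitivity grids mirroring the paper's settings for N and T.
for n in (20, 30, 50, 80):
    print("N =", n, "best =", toy_optimizer(n, 100))
for t in (100, 500, 800, 1000):
    print("T =", t, "best =", toy_optimizer(30, t))
```

With the seed fixed, enlarging N or T can only add candidate evaluations, so the best value found is non-increasing along each grid, which is the qualitative trend the tables report for ASBO.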
Validation of the efficiency of optimization algorithms based on the mean and standard deviation of their results provides valuable information for comparing their performance. Although unlikely, given that these algorithms are partly based on random population generation, it may happen that one algorithm outperforms the others only by chance. In this subsection, a statistical analysis of the performance of the optimization algorithms is presented to determine whether the superiority of ASBO over competing algorithms is statistically significant. The Wilcoxon rank-sum test (Wilcoxon, 1992), a non-parametric test, is employed for this purpose. In this analysis, an indicator called the p-value determines whether the corresponding algorithm has a significant advantage over the alternative algorithm. The results of the Wilcoxon rank-sum test with a confidence level of 0.95 are reported in Table 10. The simulation results show that, from a statistical point of view, ASBO has a significant superiority over every competitor algorithm in all cases.
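A minimal form of this test is sketched below using the large-sample normal approximation and assuming no tied values; SciPy's `scipy.stats.ranksums` provides a full implementation. The sample values are hypothetical, not the paper's results.

```python
import math

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum test via the large-sample normal
    approximation; assumes no tied values for simplicity."""
    n, m = len(x), len(y)
    pooled = sorted(x + y)
    # Rank sum of the first sample (ranks start at 1).
    w = sum(pooled.index(v) + 1 for v in x)
    mean = n * (n + m + 1) / 2
    var = n * m * (n + m + 1) / 12
    z = (w - mean) / math.sqrt(var)
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical best-of-run values: clearly separated samples give p < 0.05,
# i.e., the difference between the two optimizers is significant.
asbo = [0.01, 0.02, 0.03, 0.04, 0.05]
rival = [0.40, 0.41, 0.42, 0.43, 0.44]
print(ranksum_pvalue(asbo, rival) < 0.05)  # True
```

A p-value below 0.05 at the 0.95 confidence level rejects the hypothesis that the two sets of results come from the same distribution, which is the criterion applied in Table 10.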
Two valuable concepts that give optimization algorithms the ability to solve optimization problems efficiently are exploitation and exploration. By balancing these two concepts, optimization algorithms gain the ability to discover the optimal region and then converge to the optimal solution.
The concept of exploitation in the study of optimization algorithms expresses the ability of an algorithm to perform an exact local search of the problem-solving space. Based on this concept, after identifying the optimal area, an optimization algorithm should be able to scan the neighborhood of the best solution obtained so far in order to reach better solutions through careful local search. Exploitation capability is an important feature of optimization algorithms, especially for problems that have one main solution without any local optimal areas. The F1 to F7 unimodal test functions have this property and are therefore appropriate for assessing the exploitation ability of optimization algorithms. Based on the optimization results for this type of function presented in Table 5, the proposed ASBO converges to solutions very close to the global optimum, and for the F1 and F6 functions even to the global optimum itself. This indicates the high power of ASBO in exploitation and local search. Comparing the results of the nine competing algorithms with those of ASBO indicates that the proposed algorithm has a much higher exploitation index.
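Exploitation can be illustrated on F1, the sphere function, with a toy shrinking-step neighbourhood search. This is a generic caricature of local search under stated assumptions, not ASBO's actual mechanism.

```python
import random

def sphere(x):
    # F1 sphere: a unimodal benchmark whose only optimum is the origin.
    return sum(v * v for v in x)

def local_refine(x, step=1.0, iters=200, seed=1):
    """Toy local search: exploitation means repeatedly sampling the
    neighbourhood of the incumbent best point with a shrinking step."""
    rng = random.Random(seed)
    best, fbest = list(x), sphere(x)
    for _ in range(iters):
        cand = [v + rng.uniform(-step, step) for v in best]
        fc = sphere(cand)
        if fc < fbest:
            best, fbest = cand, fc
        step *= 0.97  # focus the search ever closer to the incumbent
    return fbest

print(local_refine([3.0, -2.0, 1.0]))
```

Because F1 has no local traps, a purely local procedure like this steadily drives the objective down, which is why unimodal functions isolate the exploitation ability of an algorithm.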
The concept of exploration for an optimization algorithm is the ability to search different areas of the search space through global search. Global search allows the algorithm to escape being trapped in limited areas, especially local optimal areas. Exploration is an important feature of optimization algorithms, especially for problems that have local solutions in different areas of the search space. The F8 to F23 multimodal objective functions, which have such local optimal solutions, are appropriate for assessing the exploration abilities of optimization algorithms. According to the optimization results for the multimodal functions provided in Tables 6 and 7, the proposed ASBO, by accurately scanning the search space, is able to pass through the local optimal areas and move toward the main solution of the objective functions; this holds especially for the F9, F11, F14, and F15 functions, for which it achieves the exact global optimum. Analysis of the performance of the compared algorithms on the multimodal functions indicates that the proposed ASBO has much higher capability in the exploration index and provides more appropriate solutions to optimization problems.
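The difficulty that exploration addresses is visible in F9, the Rastrigin function, whose landscape is carpeted with local minima around a single global optimum at the origin:

```python
import math

def rastrigin(x):
    # F9 Rastrigin: highly multimodal, global optimum 0 at the origin,
    # with a grid of local minima that trap purely local searches.
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

# A point near (1, 1) sits in a local basin: better than its immediate
# neighbourhood, yet far from the global optimum at (0, 0).
print(rastrigin([1.0, 1.0]))  # a local minimum, value 2.0
print(rastrigin([0.0, 0.0]))  # the global optimum, value 0.0
```

A search that only exploits the basin at (1, 1) stalls at value 2.0; only global moves across basins, i.e., exploration, can reach the origin, which is what the multimodal results above test.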
Conclusions and Research Perspectives
In this paper, in order to solve optimization problems effectively, a new stochastic metaheuristic algorithm named Average and Subtraction Based Optimizer (ASBO) was designed. The fundamental idea in ASBO’s design is to use the population average, together with the subtraction of the worst member from the best member, to guide the population toward the optimal solution. The various steps of the proposed ASBO were explained, and the mathematical model of the proposed approach was presented for application to optimization problems. ASBO’s performance in providing optimal solutions was tested on twenty-three standard unimodal and multimodal objective functions. The optimization results on the unimodal functions showed the high exploitation power of ASBO in converging toward the global optimum. The optimization results on the multimodal functions indicated the high exploration power of the proposed ASBO in accurately scanning the search space and providing appropriate quasi-optimal solutions. Also, to analyze whether the results obtained from ASBO are significant, the proposed approach was compared against the performance of nine algorithms: SHO, PSO, TLBO, GA, WOA, TSA, GWO, GSA, and MPA. The simulation results showed that ASBO is an effective and efficient optimizer, owing to a proper balance between exploration and exploitation. In addition, ASBO’s superior performance against the nine compared algorithms indicated that ASBO is significantly competitive in providing solutions to optimization problems.
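The fundamental idea summarized above can be caricatured in a few lines. The update rule below is a simplified stand-in for ASBO's actual equations (which are given earlier in the paper), with a greedy acceptance step added so that members never get worse; every coefficient and design choice here is an assumption for illustration only.

```python
import random

def asbo_sketch(f, dim, n_pop=30, max_iter=200, lo=-100.0, hi=100.0, seed=0):
    """Simplified sketch of the average-and-subtraction idea: members move
    toward the population mean plus a scaled (best - worst) difference.
    NOT the paper's exact ASBO equations."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_pop)]
    init_best = min(f(x) for x in pop)
    for _ in range(max_iter):
        scores = [f(x) for x in pop]
        best = pop[scores.index(min(scores))]
        worst = pop[scores.index(max(scores))]
        mean = [sum(x[d] for x in pop) / n_pop for d in range(dim)]
        for i, x in enumerate(pop):
            cand = [x[d] + rng.random() * (mean[d] - x[d])
                    + rng.random() * (best[d] - worst[d]) for d in range(dim)]
            cand = [min(hi, max(lo, v)) for v in cand]  # clamp to bounds
            if f(cand) < scores[i]:  # greedy acceptance keeps improvements
                pop[i] = cand
    return init_best, min(f(x) for x in pop)

f_initial, f_final = asbo_sketch(lambda x: sum(v * v for v in x), dim=2)
print(f_final <= f_initial)  # True: the best value never worsens
```

The greedy acceptance guarantees that the best objective value is monotone non-increasing over iterations, while the mean and best-minus-worst terms supply, respectively, a contraction toward promising regions and a direction informed by the population extremes.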
The authors would like to propose several research directions for future studies, including the design of binary and multi-objective versions of the proposed ASBO. In addition, the application of ASBO to problems in various sciences and to real-world problems offers further possibilities for study.
In future research, ASBO could be applied to further optimization problems, and its results analyzed in comparison with other optimization algorithms. As a caveat for ASBO, and indeed for all optimization algorithms, there is always the possibility that newer algorithms will be developed that provide better quasi-optimal solutions.
The abbreviations used in this paper are listed in Table 11.
| Abbreviation | Meaning |
| --- | --- |
| ASBO | Average and Subtraction-Based Optimizer |
| PSO | Particle Swarm Optimization |
| WOA | Whale Optimization Algorithm |
| GSA | Gravitational Search Algorithm |
| TSA | Tunicate Swarm Algorithm |
| GWO | Grey Wolf Optimizer |
| MPA | Marine Predators Algorithm |
| SHO | Spotted Hyena Optimizer |
| NFL | No Free Lunch |
| ACO | Ant Colony Optimization |
| AEO | Artificial Ecosystem-based Optimization |
| CMBO | Cat and Mouse based Optimization |
| AGTO | Artificial Gorilla Troops Optimizer |
| HOA | Horse Herd Optimization Algorithm |
| GEO | Golden Eagle Optimizer |
| MLA | Mutated Leader Algorithm |
| MSA | Momentum Search Algorithm |
| SSA | Spring Search Algorithm |
| FDA | Flow Direction Algorithm |
| EFO | Electromagnetic Field Optimization |
| AOA | Archimedes Optimization Algorithm |
| RTGBO | Ring Toss Game-Based Optimization |
| HOGO | Hide Object Game Optimization |
| DGO | Darts Game Optimization |
| VPL | Volleyball Premier League |
| FGBO | Football Game Based Optimizer |
| TWO | Tug of War Optimization |
The MATLAB code for executing the main file.
Includes the number of decision variables and the allowable range of variables for each objective function.