An adaptive artificial bee colony algorithm with ranking-based selection and a UCB-guided strategy learning mechanism
Abstract
Artificial Bee Colony (ABC) algorithms are widely applied to continuous optimisation because of their simplicity and few control parameters, but they often suffer from slow convergence, weak exploitation, and premature convergence on complex problems. To address these issues, this paper proposes an adaptive-mechanism-based Artificial Bee Colony algorithm (AMABC) that combines ranking-based selection with a UCB-guided strategy learning mechanism. The ranking-based selection mechanism regulates individual participation in the search process, improving the exploitation of high-quality solutions while preserving population diversity. In the employed- and onlooker-bee phases, multiple search strategies are modelled as arms of a multi-armed bandit, and an Upper Confidence Bound (UCB) policy adaptively selects among them based on historical performance, dynamically balancing exploration and exploitation. In addition, an elite-guided local search is applied in the later search stage to refine solutions and improve convergence accuracy. Experiments on 22 standard benchmark functions and the CEC2014 test suite under different dimensional settings show that AMABC outperforms several representative ABC variants in convergence accuracy, convergence speed, and robustness, particularly on high-dimensional optimisation problems.
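The bandit-based strategy selection described above can be sketched with the standard UCB1 rule. This is an illustrative sketch only, not the paper's exact formulation: the number of strategies, the reward definition, and the exploration constant `c` are assumptions made for the example.

```python
import math
import random

def ucb1_select(counts, rewards, t, c=math.sqrt(2)):
    """Pick the arm (search strategy) maximising mean reward plus an
    exploration bonus; counts/rewards track per-arm history up to step t.
    Hypothetical helper, not from the paper."""
    for i, n in enumerate(counts):
        if n == 0:  # play each arm once before applying the confidence bound
            return i
    scores = [
        rewards[i] / counts[i] + c * math.sqrt(math.log(t) / counts[i])
        for i in range(len(counts))
    ]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy usage: three hypothetical search strategies; the "reward" here is a
# stand-in for an improvement signal (e.g. fitness gain after applying the
# selected strategy to a food source).
random.seed(0)
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 101):
    arm = ucb1_select(counts, rewards, t)
    reward = random.random() * (arm + 1) / 3  # arm 2 pays best on average
    counts[arm] += 1
    rewards[arm] += reward
```

Under this rule, arms with strong historical rewards are exploited more often, while rarely tried arms retain a growing exploration bonus, which is the exploration/exploitation balance the abstract refers to.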