[This article belongs to Volume 37, Issue 12]

A hybrid gray wolf optimization algorithm based on teaching-learning-based optimization

To address the problems that the gray wolf optimization (GWO) algorithm has low convergence accuracy and easily falls into local optima, this paper proposes a hybrid gray wolf optimization (HGWO) algorithm based on teaching-learning-based optimization. First, good-point set theory is used to generate the initial population, improving its ergodicity. Then, a nonlinear control-parameter strategy is proposed that strengthens global search in the early iterations, helping the algorithm avoid local optima, and strengthens local exploitation in the later iterations, improving convergence accuracy. Finally, by combining the teaching-learning-based optimization (TLBO) algorithm with particle swarm optimization (PSO), the original position-update formula is modified to optimize the algorithm's search mode and thereby improve its convergence performance. To verify the effectiveness of the HGWO algorithm, this paper compares it with the classical GWO algorithm, other swarm intelligence optimization algorithms, and other improved GWO algorithms on nine well-known benchmark test functions. The results show that the proposed HGWO algorithm significantly outperforms the classical GWO algorithm and the other swarm intelligence algorithms, and is competitive with the other improved GWO variants.
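The abstract does not give the paper's formulas, but the baseline it modifies is the classical GWO position update driven by a control parameter a that decays over the iterations. The sketch below is a minimal classical GWO in which a follows a hypothetical nonlinear (cosine) decay, illustrating the idea of keeping a large early (global search) and small late (local exploitation); the paper's actual decay law, good-point-set initializer, and TLBO/PSO-modified update are not reproduced here.

```python
import math
import random

def gwo(objective, dim, bounds, pop_size=20, max_iter=200, seed=0):
    """Minimal classical GWO with a hypothetical nonlinear decay of the
    control parameter a. This is an illustrative sketch, not the paper's
    HGWO: the good-point-set initialization and TLBO/PSO position update
    are omitted because their formulas are not given in the abstract."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Plain uniform random initialization (the paper instead uses a
    # good-point set to make the initial population more ergodic).
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)]
              for _ in range(pop_size)]
    for t in range(max_iter):
        # Hypothetical nonlinear control parameter: decays from 2 to 0
        # along a cosine curve, so exploration dominates early and
        # exploitation dominates late. (Classical GWO uses the linear
        # decay a = 2 - 2*t/max_iter.)
        a = 2.0 * math.cos(math.pi * t / (2.0 * max_iter))
        # Rank the pack; the three best wolves lead the update.
        ranked = sorted(wolves, key=objective)
        alpha, beta, delta = ranked[0], ranked[1], ranked[2]
        for i, w in enumerate(wolves):
            new_pos = []
            for d in range(dim):
                candidates = []
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2.0 * a * r1 - a   # exploration/exploitation switch
                    C = 2.0 * r2
                    D = abs(C * leader[d] - w[d])
                    candidates.append(leader[d] - A * D)
                # Classical GWO: average the pulls toward the three leaders,
                # clamped to the search bounds.
                new_pos.append(min(hi, max(lo, sum(candidates) / 3.0)))
            wolves[i] = new_pos
    return min(wolves, key=objective)

# Example on the sphere function, one of the standard benchmarks.
best = gwo(lambda x: sum(v * v for v in x), dim=5, bounds=(-10.0, 10.0))
```

With the linear decay replaced by any monotone nonlinear schedule, only the line computing `a` changes, which is what makes control-parameter strategies an easy axis of variation for GWO improvements.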