On the Global Convergence of Particle Swarm Optimization Methods

01/29/2022
by Hui Huang, et al.

In this paper we provide a rigorous convergence analysis for the renowned Particle Swarm Optimization method using tools from stochastic calculus and the analysis of partial differential equations. Based on a time-continuous formulation of the particle dynamics as a system of stochastic differential equations, we establish convergence to a global minimizer in two steps. First, we prove consensus formation of the dynamics by analyzing the time evolution of the variance of the particle distribution. Subsequently, we show that this consensus is close to a global minimizer by employing the asymptotic Laplace principle and a tractability condition on the energy landscape of the objective function. Our results allow for the use of memory mechanisms, and hold for a rich class of objectives provided certain well-preparation conditions on the hyperparameters and the initial datum are satisfied. To demonstrate the applicability of the method we propose an efficient and parallelizable implementation, which is tested in particular on a competitive and well-understood high-dimensional benchmark problem in machine learning.
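For readers unfamiliar with the method being analyzed, the following is a minimal sketch of the classical discrete-time Particle Swarm Optimization update, whose continuous-time limit motivates the SDE formulation studied in the paper. This is an illustrative implementation with assumed default hyperparameters (inertia and acceleration coefficients), not the authors' proposed parallelizable scheme; the personal-best term plays the role of the memory mechanism mentioned in the abstract.

```python
import numpy as np

def pso(objective, dim, n_particles=50, n_steps=200,
        inertia=0.7, c_cog=1.5, c_soc=1.5, seed=0):
    """Classical PSO minimizing `objective` over R^dim (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))  # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    p_best = x.copy()                                    # personal bests (memory mechanism)
    p_val = np.apply_along_axis(objective, 1, x)
    g_best = p_best[p_val.argmin()].copy()               # global best (consensus point)

    for _ in range(n_steps):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive (memory) + social (consensus) terms.
        v = inertia * v + c_cog * r1 * (p_best - x) + c_soc * r2 * (g_best - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best

# Example: the sphere function, whose global minimizer is the origin.
best = pso(lambda z: float(np.sum(z ** 2)), dim=5)
```

On a convex benchmark like the sphere function the swarm typically contracts to a consensus near the global minimizer, which is the qualitative behavior the paper's variance analysis makes rigorous.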
