Iterative Pre-Conditioning to Expedite the Gradient-Descent Method
The gradient-descent method is one of the most widely used, and perhaps the most natural, method for solving an unconstrained minimization problem. The method is quite simple and can be implemented easily in distributed settings, which is the focus of this paper. We consider a distributed system of multiple agents where each agent has a local cost function, and the goal of the agents is to minimize the sum of their local cost functions. In principle, the distributed minimization problem can be solved by the agents using the traditional gradient-descent method. However, the convergence rate (or speed) of the gradient-descent method is governed by the condition number of the minimization problem. Consequently, when the minimization problem to be solved is ill-conditioned, the gradient-descent method may require a large number of iterations to converge to the solution. Indeed, in many practical situations, the minimization problem that needs to be solved is ill-conditioned. In this paper, we propose an iterative pre-conditioning method that significantly reduces the impact of the conditioning of the minimization problem on the convergence rate of the traditional gradient-descent algorithm. The proposed pre-conditioning method can be implemented easily in the considered distributed setting. For now, we consider only a special case of the distributed minimization problem where the local cost functions of the agents are linear least-squares errors. Besides the theoretical guarantees, the improved convergence due to our pre-conditioning method is also demonstrated through experiments on a real dataset.
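To illustrate the problem the abstract describes, the following minimal sketch runs plain gradient descent on an ill-conditioned least-squares problem and contrasts it with a pre-conditioned variant. The synthetic matrix, step sizes, and the fixed inverse-Hessian pre-conditioner are illustrative assumptions only; the paper's own scheme builds the pre-conditioner iteratively and in a distributed manner, which is not reproduced here.

```python
import numpy as np

# Synthetic ill-conditioned least-squares problem: minimize f(x) = ||A x - b||^2.
# The matrix A and all numerical choices below are illustrative, not from the paper.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
singular_values = np.logspace(0, -3, n)          # condition number of A is ~1e3
A = U @ np.diag(singular_values) @ U.T
x_true = rng.standard_normal(n)
b = A @ x_true

def gradient(x):
    # Gradient of f(x) = ||A x - b||^2 is 2 A^T (A x - b).
    return 2.0 * A.T @ (A @ x - b)

H = 2.0 * A.T @ A                                # Hessian of f; condition number ~1e6
L = np.linalg.eigvalsh(H).max()                  # largest eigenvalue (Lipschitz constant)

# Plain gradient descent: the convergence rate degrades with the condition number of H,
# so progress after a fixed number of iterations is very slow.
x = np.zeros(n)
for _ in range(2000):
    x = x - (1.0 / L) * gradient(x)
print("plain GD error:          ", np.linalg.norm(x - x_true))

# Pre-conditioned gradient descent: multiply the gradient by an approximate inverse of
# the Hessian so the effective problem is well-conditioned.  A fixed inverse is used
# here purely for illustration; the paper instead constructs the pre-conditioner
# iteratively and in a distributed fashion.
K = np.linalg.inv(H + 1e-8 * np.eye(n))
x = np.zeros(n)
for _ in range(2000):
    x = x - K @ gradient(x)
print("pre-conditioned GD error:", np.linalg.norm(x - x_true))
```

On this example the plain iteration barely reduces the error within the iteration budget, while the pre-conditioned iteration converges almost immediately, which is the gap the proposed iterative pre-conditioning aims to close without requiring an explicit matrix inverse.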