Accelerating Federated Learning via Sampling Anchor Clients with Large Batches

06/13/2022
by Feijie Wu, et al.

Recent federated learning studies use large batches to improve the convergence rate, but this incurs additional computation overhead compared with small batches. To overcome this limitation, we propose a unified framework, FedAMD, which partitions the participants into anchor and miner groups according to time-varying probabilities. Each client in the anchor group computes its gradient on a large batch, which serves as its bullseye. Clients in the miner group perform multiple local updates with serial mini-batches, and each local update is also indirectly regulated by the global target derived from the average of the clients' bullseyes. As a result, the miner group follows near-optimal update directions towards the global minimizer, which are then used to update the global model. Measured by ϵ-approximation, FedAMD achieves a convergence rate of O(1/ϵ) under non-convex objectives by sampling an anchor with a constant probability. This result considerably surpasses the state-of-the-art algorithm BVR-L-SGD at O(1/ϵ^{3/2}), while FedAMD also reduces the communication overhead by at least O(1/ϵ). Empirical studies on real-world datasets validate the effectiveness of FedAMD and demonstrate the superiority of the proposed algorithm.
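
Below is a minimal, hypothetical Python sketch of one FedAMD-style communication round on a least-squares toy problem. The ToyClient class, the variance-reduction-style correction used in the miner updates, and all hyperparameters (p_anchor, lr, local_steps) are illustrative assumptions for exposition, not the paper's reference implementation.

# Hypothetical sketch of one FedAMD-style round; names, the miner correction
# term, and hyperparameters are illustrative assumptions only.
import numpy as np

class ToyClient:
    """Client holding local data for a linear least-squares objective."""
    def __init__(self, X, y, batch_size=8, seed=0):
        self.X, self.y, self.batch_size = X, y, batch_size
        self.rng = np.random.default_rng(seed)

    def large_batch_gradient(self, w):
        # Anchor role: gradient over the full local dataset -- the client's "bullseye".
        return self.X.T @ (self.X @ w - self.y) / len(self.y)

    def mini_batch_gradient(self, w, w_ref=None):
        # Miner role: gradient on one random mini-batch; optionally also evaluated
        # at a reference point on the same batch for a variance-reduction correction.
        idx = self.rng.choice(len(self.y), self.batch_size, replace=False)
        Xb, yb = self.X[idx], self.y[idx]
        g = Xb.T @ (Xb @ w - yb) / len(yb)
        if w_ref is None:
            return g
        return g, Xb.T @ (Xb @ w_ref - yb) / len(yb)

def fedamd_round(w, clients, p_anchor, target, lr=0.05, local_steps=5, rng=None):
    """One communication round: each client is an anchor with probability p_anchor."""
    rng = rng or np.random.default_rng()
    bullseyes, miner_models = [], []
    for c in clients:
        if rng.random() < p_anchor:
            bullseyes.append(c.large_batch_gradient(w))        # anchor group
        else:                                                   # miner group
            w_local = w.copy()
            for _ in range(local_steps):
                if target is None:
                    g = c.mini_batch_gradient(w_local)
                else:
                    # Local step regulated by the global target (average bullseye);
                    # the exact correction used in FedAMD may differ from this form.
                    g, g_ref = c.mini_batch_gradient(w_local, w)
                    g = g - g_ref + target
                w_local = w_local - lr * g
            miner_models.append(w_local)
    if bullseyes:
        target = np.mean(bullseyes, axis=0)    # refresh the global target
    if miner_models:
        w = np.mean(miner_models, axis=0)      # aggregate miners into the global model
    return w, target

In this sketch, anchors pay the large-batch cost only with probability p_anchor, while miners reuse the resulting low-variance global target across several cheap mini-batch steps, mirroring the computation/communication trade-off described in the abstract.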
