Federated Learning via Inexact ADMM

04/22/2022
by Shenglong Zhou, et al.

One of the crucial issues in federated learning is how to develop efficient optimization algorithms. Most current algorithms require full device participation and/or impose strong assumptions to guarantee convergence. Unlike the widely used gradient descent-based algorithms, this paper develops an inexact alternating direction method of multipliers (ADMM), which is both computation- and communication-efficient, capable of combating the straggler effect, and convergent under mild conditions.
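For intuition, here is a minimal, generic sketch of consensus ADMM for federated learning with inexact local solves, written from the standard consensus formulation rather than the paper's specific algorithm. Each client holds its own data and loss, solves its local subproblem only approximately with a few gradient steps (the "inexact" part that keeps per-round computation cheap), and the server aggregates. All names and parameter values (rho, local_steps, lr) are illustrative assumptions, not the authors' notation.

```python
import numpy as np

# Synthetic federated least-squares problem: each client k holds (A_k, b_k).
rng = np.random.default_rng(0)
n_clients, dim, samples = 5, 10, 40
clients = [(rng.standard_normal((samples, dim)),
            rng.standard_normal(samples)) for _ in range(n_clients)]

rho = 1.0          # ADMM penalty parameter (assumed value)
local_steps = 5    # inexact local solve: only a few gradient steps per round
lr = 0.01
rounds = 50

z = np.zeros(dim)                                # global (server) model
x = [np.zeros(dim) for _ in range(n_clients)]    # local client models
u = [np.zeros(dim) for _ in range(n_clients)]    # scaled dual variables

for _ in range(rounds):
    # Client side: inexact x-update, approximating
    # argmin_x f_k(x) + (rho/2)||x - z + u_k||^2 with a few gradient steps.
    for k, (A, b) in enumerate(clients):
        for _ in range(local_steps):
            grad = A.T @ (A @ x[k] - b) / samples + rho * (x[k] - z + u[k])
            x[k] -= lr * grad
    # Server side: z-update (a simple average in the consensus setting).
    z = np.mean([x[k] + u[k] for k in range(n_clients)], axis=0)
    # Client side: dual (multiplier) update.
    for k in range(n_clients):
        u[k] += x[k] - z

print("global model:", np.round(z, 3))
```

In this sketch, communication per round is limited to exchanging the local iterates and the averaged global model, and the cheap local solve is what distinguishes inexact ADMM from variants that solve each subproblem to high accuracy.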
