Boosting Federated Learning Convergence with Prototype Regularization

07/20/2023
by   Yu Qiao, et al.

As a distributed machine learning technique, federated learning (FL) enables clients to collaboratively train a shared model with an edge server without exposing their local data. However, the heterogeneous data distribution across clients often degrades model performance. To tackle this issue, this paper introduces a prototype-based regularization strategy: the server aggregates local prototypes from distributed clients to generate a global prototype, which is then sent back to the individual clients to guide their local training. Experimental results on MNIST and Fashion-MNIST show that our proposal achieves improvements of 3.3% and 8.9%, respectively, over the baseline FedAvg. Furthermore, our approach converges quickly in heterogeneous settings.
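
The abstract describes a two-step mechanism: the server averages per-class prototypes received from clients, and each client adds a term to its local loss that pulls feature embeddings toward the corresponding global prototype. Below is a minimal sketch of that idea, assuming prototypes are mean feature embeddings per class; the function names (aggregate_prototypes, local_loss) and the weight lambda_reg are illustrative choices, not taken from the paper.

```python
# Hedged sketch of prototype-based regularization for federated learning.
# Assumes each client reports per-class prototypes as mean feature embeddings.
import torch
import torch.nn.functional as F


def aggregate_prototypes(client_prototypes):
    """Server side: average each class's local prototypes into a global prototype.

    client_prototypes: list of dicts {class_label: Tensor[d]}, one dict per client.
    Returns a dict {class_label: Tensor[d]} of global prototypes.
    """
    collected = {}
    for protos in client_prototypes:
        for label, proto in protos.items():
            collected.setdefault(label, []).append(proto)
    return {label: torch.stack(plist).mean(dim=0) for label, plist in collected.items()}


def local_loss(logits, features, labels, global_protos, lambda_reg=1.0):
    """Client side: cross-entropy plus a regularizer pulling each sample's
    embedding toward the global prototype of its class."""
    ce = F.cross_entropy(logits, labels)
    if not global_protos:  # e.g. the first round, before any aggregation
        return ce
    targets = torch.stack([global_protos[int(y)] for y in labels])
    reg = F.mse_loss(features, targets)
    return ce + lambda_reg * reg
```

In this sketch the regularization strength lambda_reg trades off fitting the local data against staying close to the globally agreed class prototypes, which is one plausible way the guidance from the global prototype could be implemented.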
