Split Federated Learning: Speed up Model Training in Resource-Limited Wireless Networks
In this paper, we propose a novel distributed learning scheme, named group-based split federated learning (GSFL), to speed up artificial intelligence (AI) model training. Specifically, GSFL operates in a split-then-federated manner, which consists of three steps: 1) Model distribution, in which the access point (AP) splits the AI models and distributes the client-side models to clients; 2) Model training, in which each client executes forward propagation and transmits the smashed data to the edge server; the edge server then executes forward and backward propagation and returns the cut-layer gradients to the clients for updating their local client-side models; and 3) Model aggregation, in which edge servers aggregate the server-side and client-side models. Simulation results show that GSFL outperforms vanilla split learning and federated learning schemes in terms of overall training latency while achieving satisfactory accuracy.
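The abstract does not give implementation details, but the split-then-federated workflow it describes can be illustrated with a minimal sketch. The snippet below assumes PyTorch; the model shapes, cut-layer position, optimizer settings, and helper names (make_client_model, train_round, fed_avg) are all illustrative assumptions, not the authors' implementation. It shows one training round in which each client computes smashed data, the server completes the forward and backward passes and returns the cut-layer gradient, and the client-side models are then federated-averaged.

import torch
import torch.nn as nn

# Hypothetical split point: a small client-side feature extractor and a
# larger server-side classifier (shapes are illustrative only).
def make_client_model():
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())

def make_server_model():
    return nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

def train_round(client_models, server_model, batches, lr=0.1):
    # One GSFL-style round (step 2 of the abstract): client forward pass,
    # server forward/backward pass, cut-layer gradient returned to client.
    loss_fn = nn.CrossEntropyLoss()
    server_opt = torch.optim.SGD(server_model.parameters(), lr=lr)
    for client, (x, y) in zip(client_models, batches):
        client_opt = torch.optim.SGD(client.parameters(), lr=lr)
        smashed = client(x)                                   # client-side forward
        smashed_srv = smashed.detach().requires_grad_(True)   # "transmit" smashed data
        loss = loss_fn(server_model(smashed_srv), y)          # server-side forward
        server_opt.zero_grad()
        loss.backward()                                       # server-side backward
        server_opt.step()
        client_opt.zero_grad()
        smashed.backward(smashed_srv.grad)                    # return cut-layer gradient
        client_opt.step()                                     # update client-side model

def fed_avg(models):
    # Federated averaging of client-side models (step 3 of the abstract).
    # With multiple groups, server-side models would be averaged the same way.
    avg = {k: torch.stack([m.state_dict()[k].float() for m in models]).mean(0)
           for k in models[0].state_dict()}
    for m in models:
        m.load_state_dict(avg)

# Toy usage with random data for two clients.
clients = [make_client_model() for _ in range(2)]
server = make_server_model()
batches = [(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))) for _ in clients]
train_round(clients, server, batches)
fed_avg(clients)

The detach/requires_grad_ pair marks the point where the smashed data would cross the wireless link: the server treats the received activations as a fresh leaf tensor, and the gradient it accumulates there is exactly what gets sent back to the client.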