FedHM: Efficient Federated Learning for Heterogeneous Models via Low-rank Factorization
The underlying assumption of recent federated learning (FL) paradigms is that local models share the same network architecture as the global model, which becomes impractical for mobile and IoT devices with heterogeneous hardware and infrastructure. A scalable federated learning framework should handle heterogeneous clients equipped with different computation and communication capabilities. To this end, this paper proposes FedHM, a novel federated model compression framework that distributes heterogeneous low-rank models to clients and then aggregates them into a global full-rank model. Our solution enables the training of heterogeneous local models with varying computational complexities while aggregating them into a single global model. Moreover, by using low-rank models, FedHM reduces not only the computational complexity on the device but also the communication cost. Extensive experimental results demonstrate that FedHM outperforms current pruning-based FL approaches in test Top-1 accuracy (a 4.6% gain on average), with smaller model sizes (1.5x smaller on average), under various heterogeneous FL settings.
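To make the core idea concrete, the following is a minimal sketch, not the authors' implementation, of how a full-rank weight matrix can be factorized into low-rank factors (e.g., via truncated SVD) before being sent to a client, and how the factors multiply back into a full-rank matrix at aggregation time. The function names `factorize` and `reconstruct`, the rank choice, and the matrix sizes are all illustrative assumptions.

```python
import numpy as np

def factorize(weight: np.ndarray, rank: int):
    """Illustrative sketch: split a full-rank weight matrix W (m x n)
    into low-rank factors U (m x r) and V (r x n) via truncated SVD,
    so that U @ V approximates W."""
    U, s, Vt = np.linalg.svd(weight, full_matrices=False)
    sqrt_s = np.sqrt(s[:rank])
    # Spread the singular values across both factors.
    return U[:, :rank] * sqrt_s, sqrt_s[:, None] * Vt[:rank]

def reconstruct(U: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Recover an (approximate) full-rank weight matrix from the factors."""
    return U @ V

# Hypothetical example: a 256x256 layer truncated to rank 32 needs
# 2 * 256 * 32 = 16,384 parameters instead of 256 * 256 = 65,536,
# shrinking both on-device compute and the per-round upload.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
U, V = factorize(W, rank=32)
W_hat = reconstruct(U, V)
print(U.size + V.size, "vs", W.size, "parameters")
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

Under this sketch, clients with different capabilities would simply receive factors truncated to different ranks, which is one plausible reading of how a single global model can serve heterogeneous devices.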