SwiftAgg: Communication-Efficient and Dropout-Resistant Secure Aggregation for Federated Learning with Worst-Case Security Guarantees
We propose SwiftAgg, a novel secure aggregation protocol for federated learning systems, where a central server aggregates local models of N distributed users, each of size L, trained on their local data, in a privacy-preserving manner. Compared with state-of-the-art secure aggregation protocols, SwiftAgg significantly reduces the communication overhead without any compromise on security. Specifically, in the presence of at most D dropout users, SwiftAgg achieves a users-to-server communication load of (T+1)L and a users-to-users communication load of up to (N-1)(T+D+1)L, with a worst-case information-theoretic security guarantee against any subset of up to T semi-honest users who may also collude with the curious server. The key idea of SwiftAgg is to partition the users into groups of size D+T+1. In the first phase, secret sharing and aggregation of the individual models are performed within each group; in the second phase, model aggregation is performed along D+T+1 sequences of users across the groups. If a user in a sequence drops out in the second phase, the rest of the users in that sequence stay silent. This design allows only a subset of users to communicate with each other, and only the users in a single group to communicate directly with the server, eliminating the requirements, common to other secure aggregation protocols, of 1) an all-to-all communication network across users and 2) all users communicating with the server. This substantially reduces the communication cost of the system.
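To make the two-phase idea concrete, the following is a minimal sketch of grouping plus Shamir-style additive secret sharing for secure aggregation, under assumptions of ours rather than the paper's: a toy prime field, flat integer model vectors, and illustrative names such as share_model and reconstruct. It simulates phase-two sequences by summing the j-th shares across groups, rather than modeling the actual per-user message passing.

```python
# Sketch of SwiftAgg-style grouped secure aggregation (illustrative only).
import random

P = 2**31 - 1           # assumed prime field modulus; any large prime works
T, D = 1, 1             # collusion and dropout tolerances
GROUP_SIZE = T + D + 1  # users per group, as in the protocol
L = 4                   # toy model size

def share_model(model, n_shares, t):
    """Split a model vector into n_shares Shamir shares of degree t.
    Share j evaluates a random degree-t polynomial, whose constant term
    is the model entry, at point x = j+1."""
    shares = [[0] * len(model) for _ in range(n_shares)]
    for i, secret in enumerate(model):
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        for j in range(n_shares):
            acc = 0
            for c in reversed(coeffs):     # Horner evaluation mod P
                acc = (acc * (j + 1) + c) % P
            shares[j][i] = acc
    return shares

def add_vectors(a, b):
    return [(x + y) % P for x, y in zip(a, b)]

def reconstruct(points, values_per_point):
    """Lagrange-interpolate each coordinate at x = 0 to recover the sum."""
    out = [0] * len(values_per_point[0])
    for k, xk in enumerate(points):
        num, den = 1, 1
        for m, xm in enumerate(points):
            if m != k:
                num = (num * -xm) % P
                den = (den * (xk - xm)) % P
        lam = num * pow(den, P - 2, P) % P  # Lagrange coefficient at 0
        for i in range(len(out)):
            out[i] = (out[i] + lam * values_per_point[k][i]) % P
    return out

# Phase 1: within each group, users secret-share and aggregate shares.
n_users = 2 * GROUP_SIZE                    # two groups, for illustration
models = [[random.randrange(100) for _ in range(L)] for _ in range(n_users)]
groups = [list(range(g, g + GROUP_SIZE))
          for g in range(0, n_users, GROUP_SIZE)]

group_aggregates = []                       # per group: GROUP_SIZE summed shares
for group in groups:
    sums = [[0] * L for _ in range(GROUP_SIZE)]
    for u in group:
        for j, sh in enumerate(share_model(models[u], GROUP_SIZE, T)):
            sums[j] = add_vectors(sums[j], sh)
    group_aggregates.append(sums)

# Phase 2: the j-th sequence accumulates the j-th shares across groups;
# only the users of one group then send their totals to the server.
sequence_totals = group_aggregates[0]
for agg in group_aggregates[1:]:
    sequence_totals = [add_vectors(s, a) for s, a in zip(sequence_totals, agg)]

# The server needs only T+1 surviving sequences, so up to D may drop out.
surviving = list(range(T + 1))
result = reconstruct([j + 1 for j in surviving],
                     [sequence_totals[j] for j in surviving])
assert result == [sum(m[i] for m in models) % P for i in range(L)]
```

Because Shamir sharing is linear, summing shares yields a share of the summed models, which is why any T+1 of the D+T+1 sequence totals suffice for the server to recover the aggregate while each individual model stays hidden from up to T colluding users.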