No free lunch theorem for security and utility in federated learning

03/11/2022
by Xiaojin Zhang, et al.

In a federated learning scenario where multiple parties jointly learn a model from their respective data, there are two conflicting goals for the choice of appropriate algorithms. On the one hand, private and sensitive training data must be kept as secure as possible in the presence of semi-honest partners; on the other hand, a certain amount of information has to be exchanged among the parties for the sake of learning utility. This challenge calls for privacy-preserving federated learning solutions that maximize the utility of the learned model while maintaining a provable privacy guarantee for the participating parties' private data. This article illustrates a general framework that (a) formulates the trade-off between privacy loss and utility loss from a unified information-theoretic point of view, and (b) delineates quantitative bounds on the privacy-utility trade-off when different protection mechanisms, including Randomization, Sparsity, and Homomorphic Encryption, are used. It is shown that, in general, there is no free lunch for the privacy-utility trade-off: preserving privacy must be paid for with a certain degree of degraded utility. The quantitative analysis illustrated in this article may serve as guidance for the design of practical federated learning algorithms.
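To make the Randomization mechanism mentioned in the abstract concrete, the sketch below (not the authors' implementation; the noise scale `sigma` and the distortion/leakage proxies are illustrative assumptions) adds Gaussian noise to a party's local model update before it is shared, then reports rough proxies for utility loss and privacy leakage at several noise scales.

```python
# A minimal sketch of a Randomization protection mechanism: Gaussian noise
# is added to a local model update before sharing. The noise scale `sigma`
# is a hypothetical knob trading privacy for utility.
import numpy as np

rng = np.random.default_rng(seed=0)

def randomize_update(update: np.ndarray, sigma: float) -> np.ndarray:
    """Protect a local model update by adding isotropic Gaussian noise."""
    return update + rng.normal(loc=0.0, scale=sigma, size=update.shape)

# Toy local update from one party.
true_update = rng.normal(size=1000)

for sigma in [0.0, 0.1, 1.0]:
    protected = randomize_update(true_update, sigma)
    # Utility-loss proxy: distortion between the shared and true update.
    utility_loss = np.mean((protected - true_update) ** 2)
    # Privacy-leakage proxy: how well the shared update correlates with
    # the true private update that a semi-honest partner tries to recover.
    leakage = np.corrcoef(protected, true_update)[0, 1]
    print(f"sigma={sigma:.1f}  utility_loss={utility_loss:.3f}  "
          f"adversary_correlation={leakage:.3f}")
```

Increasing `sigma` lowers the adversary's correlation with the private update but inflates the distortion of the shared one, mirroring the no-free-lunch message qualitatively: less privacy loss comes only at the cost of more utility loss.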
