Privacy-Preserving Distributed Projection LMS for Linear Multitask Networks
We develop a privacy-preserving distributed projection least mean squares (LMS) strategy over linear multitask networks, where agents' local parameters of interest or tasks are linearly related. Each agent is interested in not only improving its local inference performance via in-network cooperation with neighboring agents, but also protecting its own individual task against privacy leakage. In our proposed strategy, at each time instant, each agent sends a noisy estimate, which is its local intermediate estimate corrupted by a zero-mean additive noise, to its neighboring agents. We derive a sufficient condition to determine the amount of noise to add to each agent's intermediate estimate to achieve an optimal trade-off between the network mean-square-deviation and an inference privacy constraint. We propose a distributed and adaptive strategy to compute the additive noise powers, and study the mean and mean-square behaviors and privacy-preserving performance of the proposed strategy. Simulation results demonstrate that our strategy is able to balance the trade-off between estimation accuracy and privacy preservation.
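The core mechanism described above, each agent perturbing its local intermediate LMS estimate with zero-mean additive noise before sharing it with neighbors, can be illustrated with a minimal sketch. This is not the paper's algorithm: the linear inter-task relations, the projection step, and the adaptive noise-power selection are omitted, and the two-agent topology, uniform combination weights, and fixed noise power `sigma_priv` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4                              # parameter dimension
w_true = rng.standard_normal(M)    # common task (inter-task relations omitted)
w = [np.zeros(M), np.zeros(M)]     # local estimates of the two agents
mu = 0.05                          # LMS step size (illustrative)
sigma_priv = 0.05                  # additive privacy-noise std (illustrative)

for _ in range(2000):
    psi = []
    for k in range(2):
        u = rng.standard_normal(M)                      # regressor
        d = u @ w_true + 0.01 * rng.standard_normal()   # noisy observation
        e = d - u @ w[k]                                # a priori error
        psi.append(w[k] + mu * e * u)                   # adapt: intermediate estimate
    # Privacy step: each agent shares a noisy copy of its intermediate
    # estimate, corrupted by zero-mean Gaussian noise.
    shared = [p + sigma_priv * rng.standard_normal(M) for p in psi]
    # Combine: average the own (clean) intermediate estimate with the
    # neighbor's noisy one (uniform weights, an illustrative choice).
    w = [0.5 * (psi[k] + shared[1 - k]) for k in range(2)]

err = np.linalg.norm(w[0] - w_true)   # residual deviation after adaptation
```

Raising `sigma_priv` strengthens the privacy protection of each shared estimate but inflates the steady-state network deviation; the trade-off the abstract refers to is precisely the principled choice of these noise powers.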