Fine-grained Private Knowledge Distillation

07/27/2022
by Shaowei Wang, et al.

Knowledge distillation has emerged as a scalable and effective approach to privacy-preserving machine learning. One remaining drawback is that it consumes privacy at the model level (i.e., client level): every distillation query incurs a privacy loss over all of one client's records. To attain fine-grained privacy accounting and improve utility, this work proposes a model-free reverse k-NN labeling method for record-level private knowledge distillation, in which each record is used to label at most k queries. Theoretically, we provide bounds on the labeling error rate under the centralized/local/shuffle models of differential privacy (with respect to the number of records per query and the privacy budget). Experimentally, we demonstrate that the method achieves new state-of-the-art accuracy with an order of magnitude lower privacy loss. Specifically, on the CIFAR-10 dataset it reaches 82.1% test accuracy with a centralized privacy budget of 1.0; on the MNIST/SVHN datasets it reaches 99.1%/95.6% accuracy, respectively, with a budget of 0.1. This is the first time that deep learning with differential privacy achieves comparable accuracy under a reasonable privacy guarantee (i.e., exp(ϵ) ≤ 1.5). Our code is available at https://github.com/liyuntong9/rknn.
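
To make the labeling idea concrete, the snippet below is a minimal sketch of reverse k-NN labeling under record-level differential privacy in the centralized model. The feature representation, Euclidean distance, voting rule, and Laplace mechanism here are assumptions for illustration; `reverse_knn_label` and its parameters are hypothetical names, not the authors' API, and the released code at the repository above should be consulted for the actual protocol.

```python
import numpy as np

def reverse_knn_label(private_feats, private_labels, query_feats,
                      num_classes, k=3, eps=1.0):
    """Hedged sketch: each private record votes for the labels of its k
    nearest unlabeled queries, so one record touches at most k queries
    (record-level privacy accounting)."""
    votes = np.zeros((len(query_feats), num_classes))
    for x, y in zip(private_feats, private_labels):
        # Distances from this private record to every unlabeled query.
        d = np.linalg.norm(query_feats - x, axis=1)
        # The record contributes one vote to each of its k closest queries.
        for q in np.argsort(d)[:k]:
            votes[q, y] += 1
    # Adding or removing one record changes at most k entries by 1, so the
    # L1 sensitivity of the vote matrix is k; Laplace noise with scale k/eps
    # yields eps-differential privacy for this sketch.
    noisy = votes + np.random.laplace(scale=k / eps, size=votes.shape)
    return noisy.argmax(axis=1)  # privately assigned labels for the queries
```

The design choice this sketch highlights is the reverse direction of the k-NN search: because each record votes on at most k query rows, the per-record sensitivity is bounded by k regardless of how many queries are answered, which is what enables record-level rather than model-level privacy accounting.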
