Image Obfuscation for Privacy-Preserving Machine Learning

10/20/2020
by Mathilde Raynal, et al.

Privacy becomes a crucial issue when the training of machine learning (ML) models is outsourced to cloud-based ML service platforms. While solutions based on cryptographic primitives exist, they incur a significant loss in accuracy or training efficiency and require modifications to the back-end architecture. A key challenge we tackle in this paper is the design of image obfuscation schemes that provide sufficient privacy without significantly degrading the accuracy of the ML model or the efficiency of the training process. In doing so, we address another challenge that has so far remained open: quantifying the degree of privacy provided by visual obfuscation mechanisms. We compare the ability of state-of-the-art full-reference image quality metrics to agree with human subjects on the degree of obfuscation introduced by a range of techniques. Using user surveys and two image datasets, we show that two existing image quality metrics align well with both human judgment and AI-based recognition, and can therefore be used to quantify the privacy resulting from obfuscation. With this ability to quantify privacy, we show that we can provide adequate privacy protection to the training image set at the cost of only a few percentage points of accuracy.
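To make the idea of scoring obfuscation with a full-reference quality metric concrete, below is a minimal sketch assuming SSIM from scikit-image as the metric and Gaussian blur as the obfuscation. These choices, and the interpretation of the score as a privacy proxy, are illustrative assumptions and not necessarily the metrics or schemes evaluated in the paper.

```python
# Illustrative sketch: using a full-reference image quality metric to score
# how strongly an obfuscation transforms an image. Metric (SSIM), obfuscation
# (Gaussian blur), and interpretation are assumptions, not the paper's method.
from skimage import data, img_as_float
from skimage.filters import gaussian
from skimage.metrics import structural_similarity as ssim

original = img_as_float(data.camera())     # reference image in [0, 1]
obfuscated = gaussian(original, sigma=8)   # hypothetical obfuscation step

# Full-reference metric: compares the obfuscated image against the original.
score = ssim(original, obfuscated, data_range=1.0)

# Lower similarity -> stronger visual obfuscation -> (as a proxy) more privacy.
print(f"SSIM between original and obfuscated image: {score:.3f}")
```

In this setup, sweeping the obfuscation strength (e.g., the blur sigma) and tracking both the metric score and the downstream model accuracy would expose the privacy-utility trade-off the abstract describes.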
