VerifyML: Obliviously Checking Model Fairness Resilient to Malicious Model Holder

10/16/2022
by Guowen Xu, et al.

In this paper, we present VerifyML, the first secure inference framework for checking the degree of fairness of a given machine learning (ML) model. VerifyML is generic and immune to any obstruction by a malicious model holder during the verification process. We build VerifyML on secure two-party computation (2PC) and carefully customize a series of optimizations to boost its performance for both linear and nonlinear layers. Specifically, (1) VerifyML moves the vast majority of the overhead offline, thus meeting the low-latency requirements of online inference. (2) To speed up the offline preparation, we design novel homomorphic parallel computing techniques that accelerate the generation of authenticated Beaver's triples (including matrix-vector and convolution triples), achieving up to 1.7× computation speedup and at least 10.7× less communication overhead than state-of-the-art work. (3) We also present a new cryptographic protocol for evaluating the activation functions of nonlinear layers that is 4×–42× faster and requires more than 48× less communication than existing 2PC protocols against malicious parties. In fact, VerifyML even beats state-of-the-art semi-honest secure ML inference systems. We provide a formal security analysis of VerifyML and demonstrate its performance superiority on mainstream ML models, including ResNet-18 and LeNet.
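
To make the offline/online split concrete, the sketch below simulates the classic Beaver-triple multiplication that underlies this family of protocols: a triple (a, b, c = a·b) is secret-shared during the offline phase, after which the online phase multiplies two shared values using only cheap local arithmetic and two openings. This is a minimal single-process illustration, not VerifyML's implementation: the ring Z_{2^64}, the dealer-generated (unauthenticated) triple, and all function names are assumptions made for exposition; the paper itself generates authenticated triples with homomorphic encryption and attaches MACs to resist a malicious party.

import secrets

MOD = 1 << 64  # illustrative choice: the ring Z_{2^64}

def share(x):
    # Additively secret-share x between two parties.
    x0 = secrets.randbelow(MOD)
    x1 = (x - x0) % MOD
    return x0, x1

def reconstruct(s0, s1):
    # Open a shared value by summing both shares.
    return (s0 + s1) % MOD

# Offline phase: a dealer produces a Beaver triple (a, b, c) with c = a*b
# and shares it. (In the paper, triples are produced with homomorphic
# encryption and authenticated; the dealer here is a simplification.)
a = secrets.randbelow(MOD)
b = secrets.randbelow(MOD)
c = (a * b) % MOD
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Online phase: the parties hold shares of secret inputs x and y.
x, y = 12345, 6789
x0, x1 = share(x); y0, y1 = share(y)

# Each party locally masks its input shares, then the differences
# d = x - a and e = y - b are opened (they reveal nothing about x, y).
d = reconstruct((x0 - a0) % MOD, (x1 - a1) % MOD)
e = reconstruct((y0 - b0) % MOD, (y1 - b1) % MOD)

# Each party computes its share of x*y locally; only party 0 adds the
# public d*e term. Correctness: x*y = d*e + d*b + e*a + c.
z0 = (d * e + d * b0 + e * a0 + c0) % MOD
z1 = (d * b1 + e * a1 + c1) % MOD

assert reconstruct(z0, z1) == (x * y) % MOD
print("shared product:", reconstruct(z0, z1))

The same pattern extends to the matrix-vector and convolution triples mentioned above: the triple's shape matches the linear operation, so the expensive correlated randomness is produced offline and the online phase stays cheap.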
