MaskedNet: A Pathway for Secure Inference against Power Side-Channel Attacks

10/29/2019
by Anuj Dubey, et al.

Differential Power Analysis (DPA) has been an active area of research for the past two decades, studying attacks that extract secret information from cryptographic implementations through power measurements, as well as defenses against such attacks. Unfortunately, research on power side-channels has so far focused predominantly on analyzing implementations of ciphers such as AES, DES, and RSA, and more recently post-quantum cryptographic primitives (e.g., lattice-based schemes). Meanwhile, machine-learning, and in particular deep-learning, applications are becoming ubiquitous, with several scenarios in which the machine-learning model is intellectual property requiring confidentiality. The problem of extending side-channel analysis to machine-learning model extraction is largely unexplored. This paper extends the DPA framework to neural-network classifiers. First, it shows DPA attacks on classifiers that can extract secret model parameters such as the weights and biases of a neural network. Second, it proposes the first countermeasures against these attacks by augmenting the design with masking. The resulting design uses novel masked components such as masked adder trees for fully-connected layers and masked Rectified Linear Units (ReLUs) for activation functions. On a SAKURA-X FPGA board, experiments show both the insecurity of an unprotected design and the security of our proposed protected design.
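To illustrate the masking idea behind components like the masked adder tree, the sketch below shows first-order arithmetic masking applied to the weighted-sum computation of a fully-connected layer. This is only a conceptual software illustration under assumed parameters (a 16-bit fixed-point modulus, Python's `secrets` RNG, and the hypothetical helpers `mask` and `masked_weighted_sum`), not the paper's actual hardware design.

```python
# Minimal sketch of first-order arithmetic masking for the weighted-sum
# (adder-tree) stage of a fully-connected layer. Illustrative only; the
# modulus, RNG, and helper names are assumptions, not the paper's design.
import secrets

MOD = 2**16  # hypothetical fixed-point word size

def mask(value):
    """Split a secret value into two arithmetic shares: value = (s0 + s1) mod MOD."""
    r = secrets.randbelow(MOD)
    return ((value - r) % MOD, r)

def masked_weighted_sum(weight_shares, activations):
    """Accumulate each share independently, so the true weights never appear
    in any intermediate value that could leak through power consumption."""
    acc0, acc1 = 0, 0
    for (w0, w1), x in zip(weight_shares, activations):
        acc0 = (acc0 + w0 * x) % MOD
        acc1 = (acc1 + w1 * x) % MOD
    return acc0, acc1  # recombination happens only inside a protected stage

# Usage: the secret weights stay split into shares throughout the computation.
weights = [3, 65535, 7]      # example weights in 16-bit two's complement
activations = [1, 2, 3]      # public layer inputs
shares = [mask(w) for w in weights]
s0, s1 = masked_weighted_sum(shares, activations)
assert (s0 + s1) % MOD == sum(w * x for w, x in zip(weights, activations)) % MOD
```

Because each share is individually uniformly random, no single intermediate value in the accumulation depends on the unmasked weights, which is the core property a first-order masking countermeasure relies on.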
