Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise

11/02/2022
by   Jhih-Cing Huang, et al.

Recently, quantum classifiers have been shown to be vulnerable to adversarial attacks, in which imperceptible perturbations fool them into misclassification. In this paper, we present a first theoretical study showing that adding quantum random rotation noise can improve the robustness of quantum classifiers against adversarial attacks. We connect this mechanism to the definition of differential privacy and demonstrate that a quantum classifier trained in the natural presence of additive noise is differentially private. Finally, we derive a certified robustness bound that enables quantum classifiers to defend against adversarial examples, supported by experimental results.
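To make the noise mechanism concrete, the following is a minimal NumPy sketch (not the authors' implementation) of applying a random single-qubit Z-rotation, with an angle drawn from a Gaussian, to a quantum state. The state, the noise scale `sigma`, and the choice of rotation axis are illustrative assumptions; the key point is that small random rotations only mildly perturb measurement statistics, which is the intuition behind noise-based certified robustness.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation_noise(state, sigma, rng):
    """Apply a random Z-rotation with angle ~ N(0, sigma^2) to a qubit state.

    This models additive random rotation noise of the kind the paper uses;
    `state` is a normalized length-2 complex state vector.
    """
    theta = rng.normal(0.0, sigma)
    # R_z(theta) = diag(e^{-i*theta/2}, e^{+i*theta/2})
    rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
    return rz @ state

# |+> state: an equal superposition, maximally sensitive to Z-rotations.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Average overlap with the original state over many noisy copies.
overlaps = []
for _ in range(1000):
    noisy = random_rotation_noise(plus, sigma=0.1, rng=rng)
    overlaps.append(abs(plus.conj() @ noisy) ** 2)

mean_overlap = float(np.mean(overlaps))
print(mean_overlap)  # close to 1: weak noise leaves the state nearly intact
```

For a rotation angle theta, the overlap equals cos^2(theta/2), so with sigma = 0.1 the mean overlap is approximately (1 + e^(-sigma^2/2))/2 ≈ 0.9975, illustrating how a certified bound can trade a small accuracy cost for robustness.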
