Probabilistic Jacobian-based Saliency Maps Attacks

07/12/2020
by António Loison, et al.

Machine learning models have achieved spectacular performance in various critical fields, including intelligent monitoring, autonomous driving and malware detection. Robustness against adversarial attacks is therefore a key requirement for trusting these models. In particular, the Jacobian-based Saliency Map Attack (JSMA) is widely used to fool neural network classifiers. In this paper, we introduce Weighted JSMA (WJSMA) and Taylor JSMA (TJSMA), simpler, faster and more efficient versions of JSMA. These attacks rely upon new saliency maps involving the neural network Jacobian, its output probabilities and the input features. We demonstrate the advantages of WJSMA and TJSMA through two computer vision applications: (1) LeNet-5, a well-known neural network classifier (NNC), on the MNIST database, and (2) a more challenging NNC on the CIFAR-10 dataset. We find that WJSMA and TJSMA significantly outperform JSMA in success rate, speed and average number of changed features. For instance, on LeNet-5 (with 100% and 99.49% accuracies on the training and test sets), WJSMA and TJSMA respectively exceed 97% and 98.60% in success rate for a maximum authorised distortion of 14.5%, outperforming JSMA by more than 9.5 and 11 percentage points. The new attacks are then used for defence, yielding models more robust than those trained against JSMA. Like JSMA, our attacks do not scale to large datasets such as ImageNet; nevertheless, they remain attractive for relatively small datasets like MNIST and CIFAR-10 and may serve as tools for future applications. Code is available at <https://github.com/probabilistic-jsmas/probabilistic-jsmas>.
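For intuition, here is a minimal, hypothetical PyTorch sketch of a probability-weighted, JSMA-style saliency map in the spirit of WJSMA. The function name `probability_weighted_saliency` and the exact weighting scheme are illustrative assumptions, not the authors' code; the paper's precise definitions and the official implementation are in the repository linked above.

```python
import torch

def probability_weighted_saliency(model, x, target, increase=True):
    """Illustrative probability-weighted, JSMA-style saliency map.

    model  : classifier returning logits of shape (1, num_classes)
    x      : input tensor of shape (1, num_features)
    target : index of the class the attack tries to reach
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    probs = torch.softmax(logits, dim=1)[0]          # output probabilities
    num_classes = logits.shape[1]

    # Jacobian of the output probabilities w.r.t. the input features
    jac = torch.stack([
        torch.autograd.grad(probs[c], x, retain_graph=True)[0].flatten()
        for c in range(num_classes)
    ])                                               # (num_classes, num_features)

    alpha = jac[target]                              # d p_target / d x_i
    # Gradients of the other classes, weighted by their probabilities
    weights = probs.detach().unsqueeze(1)            # (num_classes, 1)
    beta = (weights * jac).sum(dim=0) - weights[target] * jac[target]

    # Classic JSMA zero-out rule, then the product of magnitudes
    if increase:
        mask = (alpha > 0) & (beta < 0)
    else:
        mask = (alpha < 0) & (beta > 0)
    saliency = torch.where(mask, alpha.abs() * beta.abs(),
                           torch.zeros_like(alpha))
    return saliency.detach()
```

Compared with plain JSMA, the only change in this sketch is that the "other classes" gradient sum is weighted by the model's output probabilities, which down-weights classes the model already considers unlikely.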
