Adversarially Robust Medical Classification via Attentive Convolutional Neural Networks

10/26/2022
by Isaac Wasserman, et al.

Convolutional neural network-based medical image classifiers have been shown to be especially susceptible to adversarial examples. Such instabilities are likely to be unacceptable in the future of automated diagnosis. Although statistical adversarial-example detection methods have proven to be effective defense mechanisms, further research is needed into the fundamental vulnerabilities of deep-learning-based systems and into how best to build models that jointly maximize standard and robust accuracy. This paper presents the inclusion of attention mechanisms in CNN-based medical image classifiers as a reliable and effective strategy for increasing robust accuracy without sacrificing standard accuracy. This method increases robust accuracy by up to 16% in typical adversarial scenarios and up to 2700%.
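The abstract does not specify which attention mechanism is used, but a common way to add attention to a CNN classifier is a squeeze-and-excitation-style channel attention block, which reweights each convolutional feature channel by a learned gate. The sketch below (in NumPy, with randomly initialized weights standing in for learned parameters) is purely illustrative of that general idea, not the paper's actual architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch).

    fmap: (C, H, W) feature map from a conv layer.
    w1:   (C // r, C) and w2: (C, C // r) weights of a bottleneck MLP,
          where r is the channel-reduction ratio.
    Returns the feature map rescaled by per-channel gates in (0, 1).
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    squeezed = fmap.mean(axis=(1, 2))
    # Excite: bottleneck MLP with ReLU, then sigmoid gating
    hidden = np.maximum(0.0, w1 @ squeezed)
    gates = sigmoid(w2 @ hidden)
    # Rescale each channel by its gate
    return fmap * gates[:, None, None]

# Toy example: 8 channels, 4x4 spatial extent, reduction ratio r = 2
rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((4, 8)) * 0.1  # stand-in for learned weights
w2 = rng.standard_normal((8, 4)) * 0.1
out = channel_attention(fmap, w1, w2)
print(out.shape)
```

Because the gates are sigmoid outputs strictly between 0 and 1, the block can only attenuate channels, never amplify them; in a trained network this lets the model suppress feature channels that contribute little to the prediction, which is one intuition for why attention can help against small adversarial perturbations.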
