Ptolemy: Architecture Support for Robust Deep Learning

08/23/2020
by Yiming Gan, et al.

Deep learning is vulnerable to adversarial attacks, where carefully crafted input perturbations can mislead a well-trained deep neural network (DNN) into producing incorrect results. Today's countermeasures either cannot detect adversarial samples at inference time or introduce prohibitively high overhead to be practical at inference time. We propose Ptolemy, an algorithm-architecture co-designed system that detects adversarial attacks at inference time with low overhead and high accuracy. We exploit the synergy between DNN inference and imperative program execution: an input to a DNN uniquely activates a set of neurons that contribute significantly to the inference output, analogous to the sequence of basic blocks exercised by an input in a conventional program. Critically, we observe that adversarial samples tend to activate paths distinct from those of benign inputs. Leveraging this insight, we propose an adversarial sample detection framework that uses canary paths generated from offline profiling to detect adversarial samples at runtime. The Ptolemy compiler, along with the co-designed hardware, enables efficient execution by exploiting the unique algorithmic characteristics. Extensive evaluations show that Ptolemy achieves higher or similar adversarial example detection accuracy than today's mechanisms with a much lower runtime overhead (as low as 2%).
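To make the path-based detection idea concrete, here is a minimal sketch in Python/NumPy. It uses a simplified forward-activation notion of a "path" (the paper derives important neurons from their contribution to the inference output, which need not match this heuristic), and the names extract_path, theta, tau, and canary_paths are illustrative assumptions, not Ptolemy's actual API:

import numpy as np

def extract_path(activations, theta=0.5):
    # activations: list of per-layer activation arrays for one input.
    # Per layer, keep the smallest set of neurons whose cumulative
    # (absolute) activation reaches a fraction theta of the layer total;
    # theta is a hypothetical cutoff knob.
    path = []
    for layer in activations:
        flat = np.abs(np.asarray(layer, dtype=float)).ravel()
        order = np.argsort(flat)[::-1]            # neurons, largest first
        cum = np.cumsum(flat[order])
        k = int(np.searchsorted(cum, theta * cum[-1])) + 1
        mask = np.zeros(flat.size, dtype=bool)
        mask[order[:k]] = True                    # the "activated" set
        path.append(mask)
    return path

def path_similarity(p, q):
    # Mean per-layer Jaccard similarity between two paths.
    sims = [(a & b).sum() / max((a | b).sum(), 1) for a, b in zip(p, q)]
    return float(np.mean(sims))

def is_adversarial(path, canary_paths, predicted_class, tau=0.7):
    # Flag the input if its path strays too far from the canary path
    # profiled offline for the predicted class (tau is hypothetical).
    return path_similarity(path, canary_paths[predicted_class]) < tau

In this sketch, canary_paths would be built offline by profiling benign inputs of each class and aggregating their paths; at inference time, an input whose path is too dissimilar from its predicted class's canary is flagged as adversarial.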
