DetectX – Adversarial Input Detection using Current Signatures in Memristive XBar Arrays

by Abhishek Moitra et al.

Adversarial input detection has emerged as a prominent technique for hardening Deep Neural Networks (DNNs) against adversarial attacks. Most prior works use neural-network-based detectors or complex statistical analysis for adversarial detection. These approaches are computationally intensive and themselves vulnerable to adversarial attacks. To this end, we propose DetectX, a hardware-friendly adversarial detection mechanism that uses hardware signatures such as the Sum of column Currents (SoI) in memristive crossbars (XBar). We show that adversarial inputs have higher SoI than clean inputs; however, the difference is too small for reliable adversarial detection. Hence, we propose a dual-phase training methodology: Phase 1 training increases the separation between clean and adversarial SoIs, while Phase 2 training improves overall robustness against adversarial attacks of different strengths. For hardware-based adversarial detection, we implement the DetectX module using 32nm CMOS circuits and integrate it with a Neurosim-like analog crossbar architecture. We perform hardware evaluation of the Neurosim+DetectX system on the Neurosim platform using the datasets CIFAR10 (VGG8), CIFAR100 (VGG16), and TinyImagenet (ResNet18). Our experiments show that DetectX is 10x-25x more energy efficient than previous state-of-the-art works and is immune to dynamic adversarial attacks. Moreover, we achieve high detection performance (ROC-AUC > 0.95) for strong white-box and black-box attacks. The code has been released at
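The core detection idea can be illustrated with a small sketch. In a memristive crossbar, each column current is the analog dot product of the input voltages with that column's conductances, and the SoI is the sum of those column currents; detection then reduces to comparing an input's SoI against a threshold calibrated on clean data. The conductance matrix, threshold, and function names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crossbar conductance matrix (rows = input lines, cols = output columns).
G = rng.uniform(0.0, 1.0, size=(64, 32))

def sum_of_column_currents(v_in, G):
    """SoI: total current summed over all crossbar columns.

    Each column current is the analog dot product I_j = sum_i v_i * G_ij,
    so SoI = sum_j I_j. With non-negative inputs and conductances, larger
    perturbations raise the SoI.
    """
    column_currents = v_in @ G  # I_j for every column j
    return float(column_currents.sum())

def is_adversarial(v_in, G, threshold):
    # Flag inputs whose SoI exceeds a threshold calibrated on clean inputs.
    return sum_of_column_currents(v_in, G) > threshold

clean = rng.uniform(0.0, 0.5, size=64)
perturbed = clean + 0.1  # toy "adversarial" perturbation that raises the SoI
tau = sum_of_column_currents(clean, G) + 1e-6  # illustrative clean-input threshold

print(is_adversarial(clean, G, tau))      # False
print(is_adversarial(perturbed, G, tau))  # True
```

In practice the clean/adversarial SoI gap is small, which is exactly why the paper's dual-phase training is needed to widen the separation before a simple threshold like `tau` becomes reliable.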


