Deep adversarial attack on target detection systems

08/12/2021
by Uche M. Osahor, et al.

Target detection systems identify targets by localizing their coordinates on the input image of interest. This is ideally achieved by labeling each pixel in an image as either a background pixel or a potential target pixel. Deep Convolutional Neural Network (DCNN) classifiers have proven to be successful tools for computer vision applications. However, prior research confirms that even state-of-the-art classifier models are susceptible to adversarial attacks. In this paper, we show how to generate adversarial infrared images by adding small perturbations to the target region that deceive a DCNN-based target detector to a remarkable degree. We demonstrate significant progress in developing visually imperceptible adversarial infrared images in which the targets remain visually recognizable to an expert, yet a DCNN-based target detector cannot detect them.
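The abstract does not specify the exact attack used, but the idea of adding small, region-confined perturbations can be illustrated with a generic masked PGD-style sketch. Everything below (the `model`, `loss_fn`, `target_mask`, `labels`, and the step-size/budget parameters) is an assumption for illustration, not the authors' published method.

```python
import torch

def masked_pgd_attack(model, image, target_mask, loss_fn, labels,
                      eps=8 / 255, alpha=2 / 255, steps=10):
    """Perturb only pixels inside `target_mask` to suppress detections.

    image:       (1, C, H, W) input tensor in [0, 1]
    target_mask: (1, 1, H, W) binary mask marking the target region
    labels:      ground-truth annotations consumed by `loss_fn`
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)          # detection loss on current image
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            # Ascend the detection loss, but only inside the masked target region.
            adv = adv + alpha * grad.sign() * target_mask
            # Keep the perturbation small (L_inf ball) and the image valid.
            adv = image + torch.clamp(adv - image, -eps, eps)
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```

Confining the update to `target_mask` keeps the perturbation localized to the target area, consistent with the abstract's description of perturbing the target region rather than the whole image.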
