Fooling a Real Car with Adversarial Traffic Signs

06/30/2019
by Nir Morgulis, et al.

Attacks on neural-network-based classifiers using adversarial images have gained a lot of attention recently. An adversary can purposely generate an image that is indistinguishable from an innocent image to a human being but is incorrectly classified by a neural network. Such adversarial images do not need to be tuned to a particular classifier architecture: an image that fools one network can fool another with a certain success rate.

The published works mostly concentrate on the use of modified image files for attacks against classifiers trained on model databases. Although there is a general understanding that such attacks can be carried out in the real world as well, works considering real-world attacks are scarce. Moreover, to the best of our knowledge, there have been no reports on attacks against real production-grade image classification systems.

In our work we present a robust pipeline for reproducible production of adversarial traffic signs that can fool a wide range of classifiers, both open-source and production-grade, in the real world. The efficiency of the attacks was checked both with neural-network-based classifiers and with legacy computer vision systems. Most of the attacks were performed in black-box mode, i.e. adversarial signs produced for a particular classifier were used to attack a variety of other classifiers. The efficiency was confirmed in drive-by experiments with the production-grade traffic sign recognition system of a real car.
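
The transferability mentioned above is what makes black-box attacks possible: a perturbation computed against one surrogate model is then shown to other classifiers. The following is only a minimal sketch of the basic adversarial-perturbation step (the fast gradient sign method in PyTorch), not the authors' sign-production pipeline; the model, preprocessing, and epsilon value are illustrative assumptions.

```python
# Minimal FGSM sketch: perturb an input so a surrogate model misclassifies it.
# The surrogate model, input range, and epsilon are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    image: tensor of shape (1, C, H, W) with values in [0, 1]
    label: tensor of shape (1,) holding the true class index
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

In a black-box setting, the resulting image (or a physical print of it) would then be presented to classifiers other than the surrogate used to compute the gradient.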
