Disrupting Deepfakes with an Adversarial Attack that Survives Training

06/17/2020
by Eran Segalis, et al.

The rapid progress in generative models and autoencoders has given rise to effective video tampering techniques used for generating deepfakes. Mitigation research is mostly focused on post-factum deepfake detection rather than prevention. We complement these efforts by proposing a prevention technique against face-swapping autoencoders. Our technique consists of a novel training-resistant adversarial attack that can be applied to a video to disrupt face-swapping manipulations. Our attack introduces spatio-temporal distortions into the output of face-swapping autoencoders, and it holds whether or not our adversarial images have been included in the training set of those autoencoders. To implement the attack, we construct a bilevel optimization problem in which we train a generator and a face-swapping model instance against each other. Specifically, we pair each input image with a target distortion and feed them into a generator that produces an adversarial image; this image exhibits the distortion when a face-swapping autoencoder is applied to it. We solve the optimization problem by training the generator and the face-swapping model simultaneously using an iterative process of alternating optimization. Finally, we validate our attack using a popular implementation of FaceSwap and show that it transfers across different models and target faces. More broadly, these results demonstrate the existence of training-resistant adversarial attacks, potentially applicable to a wide range of domains.
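The alternating optimization described above can be illustrated with a short sketch. The architectures, loss terms, and the way a target distortion is paired with each frame below are illustrative assumptions, not the paper's exact setup (which builds on a full FaceSwap autoencoder): the key point is the two alternating steps, one training the face-swapping model on the adversarial frames and one training the generator so that the model's output exhibits the paired distortion.

```python
# Minimal sketch of the alternating (bilevel) optimization; toy architectures,
# loss weights, and data are assumptions for illustration only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps (clean frame, target distortion) -> adversarial frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, distortion):
        # Small residual perturbation keeps the adversarial frame close to the input.
        delta = self.net(torch.cat([x, distortion], dim=1))
        return torch.clamp(x + 0.05 * delta, 0.0, 1.0)

class FaceSwapAE(nn.Module):
    """Stand-in for a face-swapping autoencoder (encoder + decoder)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

gen, ae = Generator(), FaceSwapAE()
opt_gen = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_ae = torch.optim.Adam(ae.parameters(), lr=1e-4)
mse = nn.MSELoss()

frames = torch.rand(8, 3, 64, 64)       # toy stand-in for video frames
distortions = torch.rand(8, 3, 64, 64)  # one target distortion paired with each frame

for step in range(1000):
    # (1) Autoencoder step: train the face-swapping model on the *adversarial*
    #     frames, mimicking an attacker who trains on the protected video.
    adv = gen(frames, distortions).detach()
    opt_ae.zero_grad()
    loss_ae = mse(ae(adv), adv)          # standard reconstruction objective
    loss_ae.backward()
    opt_ae.step()

    # (2) Generator step: push the autoencoder's output on the adversarial
    #     frames toward the paired distortion, while an imperceptibility term
    #     keeps the adversarial frames close to the originals.
    adv = gen(frames, distortions)
    opt_gen.zero_grad()
    loss_gen = mse(ae(adv), distortions) + 10.0 * mse(adv, frames)
    loss_gen.backward()
    opt_gen.step()
```

In this sketch the distortion only needs to survive the reconstruction of a model trained on the adversarial frames themselves, which is the training-resistance property the abstract refers to; the paper's evaluation additionally checks that the effect transfers to other models and target faces.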
