FTFDNet: Learning to Detect Talking Face Video Manipulation with Tri-Modality Interaction

by Ganglai Wang et al.

DeepFake-based digital facial forgery threatens public media security, especially when lip manipulation is used in talking face generation, which further increases the difficulty of fake video detection. Because only the lip shape is changed to match the given speech, facial identity features are hard to discriminate in such fake talking face videos. Moreover, existing detectors pay little attention to the audio stream as prior knowledge, so failure to detect fake talking face videos becomes almost inevitable. We observe that the optical flow of a fake talking face video is disordered, especially in the lip region, whereas the optical flow of a real video changes regularly; motion features derived from optical flow are therefore useful for capturing manipulation cues. In this study, a fake talking face detection network (FTFDNet) is proposed that incorporates visual, audio, and motion features using an efficient cross-modal fusion (CMF) module. Furthermore, a novel audio-visual attention mechanism (AVAM) is proposed to discover more informative features; it is modular and can be seamlessly integrated into any audio-visual CNN architecture. With the additional AVAM, the proposed FTFDNet achieves better detection performance than other state-of-the-art DeepFake video detection methods, not only on the established fake talking face detection dataset (FTFDD) but also on the DeepFake video detection datasets DFDC and DF-TIMIT.
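The abstract does not specify the internals of the CMF module, but the idea of fusing visual, audio, and motion streams can be sketched with a simple attention-style fusion over frame-level features. The sketch below is purely illustrative, assuming hypothetical per-frame feature matrices of shape (T, d) for each modality; the function names and the concatenation scheme are assumptions, not the paper's actual design:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_fusion(visual, audio, motion):
    """Illustrative tri-modal fusion (NOT the paper's CMF module).

    visual, audio, motion: (T, d) arrays of frame-level features.
    Audio frames act as queries attending over the visual and
    motion (optical-flow) streams; the attended features are
    concatenated with the audio features.
    """
    d = visual.shape[1]
    attn_v = softmax(audio @ visual.T / np.sqrt(d))  # (T, T) audio->visual weights
    attn_m = softmax(audio @ motion.T / np.sqrt(d))  # (T, T) audio->motion weights
    fused = np.concatenate([attn_v @ visual, attn_m @ motion, audio], axis=1)
    return fused  # (T, 3*d)

rng = np.random.default_rng(0)
T, d = 8, 16
fused = cross_modal_fusion(rng.normal(size=(T, d)),
                           rng.normal(size=(T, d)),
                           rng.normal(size=(T, d)))
print(fused.shape)  # (8, 48)
```

In a real detector these features would feed a classification head; the point of the sketch is only that each modality contributes a stream, and attention lets one stream reweight another before fusion.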



