Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts

09/01/2018
by Samuel Carton, et al.

We introduce an adversarial method for producing high-recall explanations of neural text classifier decisions. Building on an existing architecture for extractive explanations via hard attention, we add an adversarial layer which scans the residual of the attention for remaining predictive signal. Motivated by the important domain of detecting personal attacks in social media comments, we additionally demonstrate the importance of manually setting a semantically appropriate 'default' behavior for the model by explicitly manipulating its bias term. We develop a validation set of human-annotated personal attacks to evaluate the impact of these changes.
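The architecture the abstract describes can be summarized in a compact sketch: a hard-attention extractor selects a 0/1 token mask, a predictor classifies from the selected tokens, and an adversary scans the residual (the unselected tokens) for leftover predictive signal. The PyTorch code below is a minimal illustration under stated assumptions, not the authors' implementation: all module names and hyperparameters are hypothetical, gradient reversal stands in for whatever adversarial training scheme the paper uses, and the 'default' behavior is shown as pinning the predictor's output bias toward an assumed "no attack" class 0.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated gradient on the backward pass.
    Lets one loss train the adversary to find signal in the residual while
    pushing the extractor to remove that signal."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()


class Classifier(nn.Module):
    """GRU classifier over (masked) token embeddings."""
    def __init__(self, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, embedded):
        _, h = self.rnn(embedded)      # h: (1, batch, hidden)
        return self.out(h.squeeze(0))  # logits: (batch, num_classes)


class ExtractiveAdversarialNet(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Extractor: per-token score, later thresholded into a hard mask.
        self.token_scorer = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.predictor = Classifier(embed_dim, hidden_dim, num_classes=2)
        self.adversary = Classifier(embed_dim, hidden_dim, num_classes=2)
        # 'Default' behavior via the bias term (assumption: class 0 = benign):
        # with an empty rationale the predictor sees no tokens, so we pin and
        # freeze its output bias to favor the "no attack" class.
        with torch.no_grad():
            self.predictor.out.bias.copy_(torch.tensor([2.0, 0.0]))
        self.predictor.out.bias.requires_grad_(False)

    def forward(self, tokens):
        emb = self.embed(tokens)                        # (batch, seq, embed)
        scores = torch.sigmoid(self.token_scorer(emb))  # (batch, seq, 1)
        # Hard 0/1 mask with a straight-through gradient estimator.
        mask = (scores > 0.5).float() + scores - scores.detach()
        pred_logits = self.predictor(emb * mask)        # rationale tokens only
        residual = GradReverse.apply(emb * (1.0 - mask))
        adv_logits = self.adversary(residual)           # residual tokens only
        return pred_logits, adv_logits, mask


def loss_fn(pred_logits, adv_logits, mask, labels, sparsity_weight=0.01):
    """The predictor must succeed on the rationale; the adversary minimizes
    its own loss, but gradient reversal makes the extractor *maximize* it,
    sweeping any remaining predictive tokens into the mask (high recall).
    The sparsity term keeps the mask from trivially covering everything."""
    return (F.cross_entropy(pred_logits, labels)
            + F.cross_entropy(adv_logits, labels)
            + sparsity_weight * mask.mean())
```

In this sketch the high-recall pressure comes from the adversarial term: whenever the adversary can still classify the comment from the residual, the extractor is penalized until it pulls those tokens into the rationale. The paper may instead alternate extractor and adversary updates; the gradient-reversal formulation is used here only to keep the example self-contained.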
