Examining Racial Bias in an Online Abuse Corpus with Structural Topic Modeling

05/26/2020
by Thomas Davidson et al.

We use structural topic modeling to examine racial bias in data collected to train models to detect hate speech and abusive language in social media posts. We augment the abusive language dataset with an additional feature indicating the predicted probability that each tweet is written in African-American English. We then use structural topic modeling to examine the content of the tweets and how the prevalence of different topics relates to both the abusiveness annotation and the dialect prediction. We find that certain topics are disproportionately racialized and considered abusive. We discuss how topic modeling may be a useful approach for identifying bias in annotated data.
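
To make the pipeline concrete, the sketch below shows the data-preparation step in Python: augmenting the annotated tweets with a dialect-probability feature and building the document-term matrix and covariates that a structural topic model would condition on. The toy data, column names, and the predict_aae_probability placeholder are assumptions for illustration, not the authors' code; the topic model itself would typically be fit with a dedicated implementation such as the R stm package.

```python
# Minimal sketch of the data-augmentation step described in the abstract.
# Everything below is illustrative: the toy data, column names, and the
# placeholder dialect classifier are assumptions, not the authors' code.

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer


def predict_aae_probability(texts):
    """Placeholder for a dialect-identification model that returns, for each
    tweet, the estimated probability it is written in African-American English.
    A real pipeline would substitute a trained dialect classifier here."""
    return [0.5 for _ in texts]  # dummy values so the sketch runs end to end


# Toy stand-in for the annotated abusive-language corpus (text + label).
tweets = pd.DataFrame({
    "text": ["example tweet one", "example tweet two"],
    "abusive_label": [1, 0],
})

# Augment the dataset with the predicted probability of African-American English.
tweets["p_aae"] = predict_aae_probability(tweets["text"].tolist())

# Bag-of-words representation of the tweets for topic modeling.
vectorizer = CountVectorizer(min_df=1)
doc_term_matrix = vectorizer.fit_transform(tweets["text"])

# Topic-prevalence covariates: the abusiveness annotation and the dialect
# prediction. A structural topic model is then fit with topic prevalence
# modeled as a function of these covariates.
covariates = tweets[["abusive_label", "p_aae"]]
```

Modeling topic prevalence as a function of both covariates is what allows the analysis to ask whether particular topics are simultaneously more likely to be labeled abusive and more likely to be predicted as African-American English.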


