LSH methods for data deduplication in a Wikipedia artificial dataset

12/10/2021
by Juan Ciro, et al.

This paper illustrates locality-sensitive hashing (LSH) models for the identification and removal of nearly redundant data in a text dataset. To evaluate the different models, we create an artificial dataset for data deduplication using English Wikipedia articles. Area-under-curve (AUC) values over 0.9 were observed for most models, with the best model reaching 0.96. Deduplication enables more effective model training by preventing the model from learning a distribution that differs from the real one as a result of the repeated data.
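As an illustration of the general idea (not the paper's specific models), a MinHash-based sketch of near-duplicate detection can be written in a few lines of pure Python: each document is reduced to a set of character shingles, compressed into a MinHash signature, and the fraction of matching signature positions approximates the Jaccard similarity between documents. The shingle size and signature length below are arbitrary choices for demonstration.

```python
import hashlib

def shingles(text, k=5):
    """Character k-shingles of a whitespace-normalized, lowercased string."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(shingle_set, num_hashes=64):
    """For each of num_hashes seeded hash functions, keep the minimum
    hash value over all shingles; the resulting vector is the signature."""
    return [
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingle_set)
        for seed in range(num_hashes)
    ]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of agreeing positions estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = "The quick brown fox jumps over the lazy dog."
b = "The quick brown fox jumped over the lazy dog."  # near-duplicate of a
c = "Completely unrelated sentence about Wikipedia articles."

sa, sb, sc = (minhash_signature(shingles(t)) for t in (a, b, c))
print(estimated_jaccard(sa, sb))  # high: a and b are near-duplicates
print(estimated_jaccard(sa, sc))  # low: a and c share almost no shingles
```

In a full LSH pipeline, signatures would additionally be split into bands and hashed into buckets, so that only documents colliding in at least one bucket are compared directly; that banding step is what makes deduplication scale beyond pairwise comparison.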
