Exploring the Value of Personalized Word Embeddings

11/11/2020
by   Charles Welch, et al.

In this paper, we introduce personalized word embeddings and examine their value for language modeling. We compare the performance of our proposed prediction model when using personalized versus generic word representations, and study how these representations can be leveraged for improved performance. We provide insight into which types of words can be predicted more accurately when building personalized models. Our results show that a subset of words belonging to specific psycholinguistic categories tends to vary more in its representations across users, and that combining generic and personalized word embeddings yields the best performance, with a 4.7% relative reduction in perplexity. Additionally, we show that a language model using personalized word embeddings can be effectively used for authorship attribution.
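The abstract does not specify how the generic and personalized embeddings are combined; a common approach is concatenating a shared generic vector with a per-user vector before feeding it to the language model. The sketch below illustrates that idea with hypothetical table sizes and user names (all assumptions, not the authors' actual configuration):

```python
import numpy as np

# Hypothetical dimensions; the paper's actual sizes are not given in this abstract.
VOCAB_SIZE, GENERIC_DIM, PERSONAL_DIM = 1000, 50, 50
rng = np.random.default_rng(0)

# One generic embedding table shared by all users.
generic = rng.normal(size=(VOCAB_SIZE, GENERIC_DIM))

# One personalized embedding table per user (hypothetical user IDs).
personal = {
    user: rng.normal(size=(VOCAB_SIZE, PERSONAL_DIM))
    for user in ["user_a", "user_b"]
}

def combined_embedding(word_id: int, user: str) -> np.ndarray:
    """Concatenate the generic and user-specific vectors for one word.

    Concatenation is only one plausible way to combine the two
    representations; the paper may use a different scheme.
    """
    return np.concatenate([generic[word_id], personal[user][word_id]])

vec = combined_embedding(42, "user_a")
print(vec.shape)  # (100,)
```

The combined vector has dimensionality `GENERIC_DIM + PERSONAL_DIM`, so downstream model layers see both the population-level and user-specific signal for each word.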

