All-but-the-Top: Simple and Effective Postprocessing for Word Representations

02/05/2017
by Jiaqi Mu, et al.

Real-valued word representations have transformed NLP applications; popular examples such as word2vec and GloVe are recognized for their ability to capture linguistic regularities. In this paper, we demonstrate a very simple, and yet counter-intuitive, postprocessing technique -- eliminate the common mean vector and a few top dominating directions from the word vectors -- that renders off-the-shelf representations even stronger. The postprocessing is empirically validated on a variety of lexical-level intrinsic tasks (word similarity, concept categorization, word analogy) and sentence-level extrinsic tasks (semantic textual similarity), on multiple datasets, with a variety of representation methods and hyperparameter choices, and in multiple languages; in each case, the processed representations are consistently better than the original ones. Furthermore, we demonstrate quantitatively in downstream applications that neural network architectures "automatically learn" the postprocessing operation.
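The postprocessing described in the abstract is simple enough to sketch in a few lines. Below is a minimal NumPy illustration, assuming the pretrained vectors are stored as rows of a matrix X; the function name and the default value of D (the number of dominating directions to remove, a small constant on the order of d/100 for d-dimensional vectors in the paper's experiments) are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def all_but_the_top(X, D=2):
    """Sketch of the 'all-but-the-top' postprocessing.

    X: (vocab_size, dim) array of pretrained word vectors (rows).
    D: number of top principal directions to remove.
    """
    # Step 1: subtract the common mean vector from every word vector.
    mu = X.mean(axis=0)
    X_centered = X - mu

    # Step 2: find the top D principal directions of the centered vectors.
    # The rows of Vt are the right singular vectors, i.e. the PCA directions.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    top = Vt[:D]  # (D, dim) dominating directions

    # Step 3: project each centered vector away from those directions.
    return X_centered - X_centered @ top.T @ top

# Usage with stand-in data (replace with actual word2vec/GloVe vectors):
X = np.random.randn(10000, 300)
X_post = all_but_the_top(X, D=3)
```

For very large vocabularies, the full SVD above could be swapped for a truncated solver (e.g. sklearn's PCA with n_components=D) without changing the result of the projection.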
