Probing BERT's priors with serial reproduction chains

02/24/2022
by Takateru Yamakoshi, et al.

We can learn as much about language models from what they say as from their performance on targeted benchmarks. Sampling is a promising bottom-up method for probing, but generating samples from successful models like BERT remains challenging. Taking inspiration from theories of iterated learning in cognitive science, we explore the use of serial reproduction chains to probe BERT's priors. Although the masked language modeling objective does not guarantee a consistent joint distribution, we observe that a unique and consistent estimator of the ground-truth joint distribution may be obtained by a Generative Stochastic Network (GSN) sampler, which randomly selects which word to mask and reconstruct on each step. We compare the lexical and syntactic statistics of sentences from the resulting prior distribution against those of the ground-truth corpus distribution and elicit a large empirical sample of naturalness judgments to investigate how, exactly, the model deviates from human speakers. Our findings suggest the need to move beyond top-down evaluation methods toward bottom-up probing to capture the full richness of what has been learned about language.
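To make the sampling procedure concrete, here is a minimal sketch of a GSN-style masked-resample chain using Hugging Face transformers: at each step, one randomly chosen token is masked and then resampled from BERT's conditional distribution at that position. The model name, seed sentence, and number of steps are illustrative assumptions, not settings taken from the paper.

```python
# Sketch of a GSN-style serial reproduction chain over a fixed-length sentence.
# Hyperparameters below (model, seed, step count) are illustrative only.
import random
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def gsn_step(token_ids):
    """Mask one randomly chosen (non-special) token and resample it from BERT."""
    ids = token_ids.clone()
    # Positions 0 and -1 hold [CLS] and [SEP]; leave them fixed.
    pos = random.randrange(1, ids.size(1) - 1)
    ids[0, pos] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(ids).logits[0, pos]
    # Sample (rather than argmax) so the chain explores the full conditional.
    new_id = torch.multinomial(torch.softmax(logits, dim=-1), num_samples=1)
    ids[0, pos] = new_id
    return ids

# Run a short chain from a seed sentence; after burn-in, states approximate
# draws from the model's implied prior over sentences of this length.
ids = tokenizer("the cat sat on the mat", return_tensors="pt")["input_ids"]
for _ in range(500):
    ids = gsn_step(ids)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

Sampling from the full softmax at each step (rather than taking the most likely token) is what lets the chain's stationary distribution reflect the model's prior rather than collapsing onto a handful of high-probability sentences.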
