Over the years, many researchers have seemingly made the same observatio...
One of the greatest puzzles of all time is how understanding arises from...
Recent studies have shown that language models pretrained and/or fine-tu...
Various efforts in the Natural Language Processing (NLP) community have ...
Understanding the neural basis of language comprehension in the brain ha...
Spoken language understanding (SLU) tasks are usually solved by first tr...
The global geometry of language models is important for a range of appli...
Pretrained language models have been shown to encode relational informat...
Neuroscientists evaluate deep neural networks for natural language proce...
Since the popularization of the Transformer as a general-purpose feature...
Coreference resolution and semantic role labeling are NLP tasks that cap...
Large-scale pretrained language models are the major driving force behin...
Recent work on the interpretability of deep neural language models has c...
Image captioning models are usually evaluated on their ability to descri...
Representational Similarity Analysis (RSA) is a technique developed by n...
Although the vast majority of knowledge bases (KBs) are heavily biased tow...
Sequence tagging models for constituent parsing are faster, but less acc...
We investigate the effects of multi-task learning using the recently int...