Natural language is an appealing medium for explaining how large languag...
Obtaining human-interpretable explanations of large, general-purpose lan...
Causal abstraction is a promising theoretical framework for explainable ...
Language tasks involving character-level manipulations (e.g., spelling c...
Explainability methods for NLP systems encounter a version of the fundam...
Humans have the remarkable ability to recognize and acquire novel visual...
The increasing size and complexity of modern ML systems have improved the...
Little is known about what makes cross-lingual transfer hard, since fact...
Distillation efforts have led to language models that are more compact a...
In many areas, we have well-founded insights about causal structure that...
The ability to compositionally map language to referents, relations, and...
There is growing evidence that pretrained language models improve task-s...
How effectively do we adhere to nudges and interventions that help us co...
BERT, as one of the pretrained language models, attracts the most attent...
We introduce DynaSent ('Dynamic Sentiment'), a new English-language benc...
Aspect-based sentiment analysis (ABSA) and Targeted ABSA (TABSA) allow f...
Neural attention, especially the self-attention made popular by the Tran...
Grounding language in contextual information is crucial for fine-grained...
Human emotions unfold over time, and more affective computing research h...
Word embedding models such as GloVe are widely used in natural language ...
Attention mechanisms in deep neural networks have achieved excellent per...