We study feature interactions in the context of feature attribution meth...
Self-attention weights and their transformed variants have been the main...
As Natural Language Processing (NLP) technology rapidly develops and spr...
Detecting and mitigating harmful biases in modern language models is wi...
We investigate the extent to which modern, neural language models are su...
Interpreting the inner workings of neural models is a key step in ensuri...
Having the right inductive biases can be crucial in many tasks or scenar...
In the Transformer model, "self-attention" combines information from att...
Extensive research has recently shown that recurrent neural language mod...
In this paper, we define and apply representational stability analysis (...
Can neural nets learn logic? We approach this classic question with curr...
Human language, music and a variety of animal vocalisations constitute w...
How do neural language models keep track of number agreement between sub...
We investigate how neural networks can learn and process languages with ...
We evaluate 8 different word embedding models on their usefulness for pr...
Recursive neural networks (RNN) and their recently proposed extension re...
We present a self-training approach to unsupervised dependency parsing t...
We propose an extension of the recursive neural network that makes...