Updater-Extractor Architecture for Inductive World State Representations

04/12/2021
by Arseny Moskvichev, et al.

Developing NLP models traditionally involves two stages: training and application. Retention of information acquired after training (at application time) is architecturally limited by the size of the model's context window (in the case of transformers) or by the practical difficulties associated with long sequences (in the case of RNNs). In this paper, we propose a novel transformer-based Updater-Extractor architecture and a training procedure that allow the model to work with sequences of arbitrary length and to refine its knowledge about the world based on linguistic inputs. We explicitly train the model to incorporate incoming information into its world state representation, obtaining strong inductive generalization and the ability to handle extremely long-range dependencies. We prove a lemma that provides a theoretical basis for our approach; the result also offers insight into the success and failure modes of models trained with variants of Truncated Backpropagation Through Time (such as Transformer-XL). Empirically, we investigate the model's performance on three different tasks, demonstrating its promise. This preprint is still a work in progress: at present, we have focused on easily interpretable tasks, leaving the application of the proposed ideas to practical NLP problems for future work.
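The high-level interface described in the abstract can be pictured as two modules: an Updater that folds each incoming chunk of text into a fixed-size world-state representation, and an Extractor that answers queries from that state alone, without re-reading the raw history. The PyTorch sketch below illustrates only this interface under assumed design choices (slot-based state, cross-attention, module names, dimensions); it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class Updater(nn.Module):
    """Folds one chunk of input embeddings into the running world-state.

    Sketch only: here the state is assumed to be a set of learned "slot"
    vectors that cross-attend to the new tokens (hypothetical design).
    """
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                nn.Linear(d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, state, chunk):
        # state: (batch, n_slots, d_model); chunk: (batch, seq_len, d_model)
        attended, _ = self.cross_attn(state, chunk, chunk)
        state = self.norm1(state + attended)
        return self.norm2(state + self.ff(state))

class Extractor(nn.Module):
    """Answers a query using only the current world-state."""
    def __init__(self, d_model=256, n_heads=4, n_answers=100):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.readout = nn.Linear(d_model, n_answers)

    def forward(self, query, state):
        # query: (batch, query_len, d_model) -> answer logits: (batch, n_answers)
        attended, _ = self.cross_attn(query, state, state)
        return self.readout(attended.mean(dim=1))

# Usage: process an arbitrarily long stream chunk by chunk, then query the state.
d_model, n_slots = 256, 16
updater, extractor = Updater(d_model), Extractor(d_model)
state = torch.zeros(1, n_slots, d_model)              # initial world-state
for chunk in (torch.randn(1, 32, d_model) for _ in range(5)):
    state = updater(state, chunk)                      # incorporate new information
logits = extractor(torch.randn(1, 8, d_model), state)  # answer from state alone
```

Because the state has a fixed size regardless of how many chunks have been processed, the sequence length seen at application time is not bounded by a context window; how the state update is supervised (and its relation to truncated BPTT) is the subject of the paper itself.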
