LAMAL: LAnguage Modeling Is All You Need for Lifelong Language Learning

09/07/2019
by Fan-Keng Sun, et al.

Most research on lifelong learning (LLL) applies to images or games, but not language. Here, we introduce LAMAL, a simple yet effective method for LLL based on language modeling. LAMAL replays pseudo samples of previous tasks while requiring no extra memory or model capacity. Specifically, LAMAL is a language model that simultaneously learns to solve the tasks and to generate training samples. Before training on a new task, the model generates pseudo samples of previous tasks, which are then trained on alongside the data of the new task. The results show that LAMAL prevents catastrophic forgetting without any sign of intransigence and can solve up to five very different language tasks sequentially with only one model. Overall, LAMAL outperforms previous methods by a considerable margin and is only 2-3% worse than multitasking, which is usually considered the upper bound of LLL. Our source code is available at https://github.com/xxx.
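The abstract describes the core loop: before each new task, the language model itself generates pseudo samples standing in for earlier tasks, and training then proceeds on the mixture. Below is a minimal sketch of that idea using Hugging Face `transformers` with GPT-2. The "[GEN]" generation prompt, the 0.2 replay ratio, the toy `tasks` data, and all hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Sketch of pseudo-sample replay for lifelong language learning.
# Assumptions (not from the paper): "[GEN]" as a generation prompt,
# a replay ratio of 0.2, and tiny toy task datasets.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def generate_pseudo_samples(n):
    """Sample n pseudo examples of previous tasks from the model itself."""
    prompt = tokenizer("[GEN]", return_tensors="pt")
    outputs = model.generate(
        prompt.input_ids,
        do_sample=True,
        top_k=50,
        max_length=64,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

def train_on(texts):
    """One pass over texts with the standard next-token LM loss."""
    model.train()
    for text in texts:
        batch = tokenizer(text, return_tensors="pt",
                          truncation=True, max_length=128)
        loss = model(batch.input_ids, labels=batch.input_ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Toy stand-ins for a sequence of very different language tasks.
tasks = [
    ["question: What is the capital of France? answer: Paris"],
    ["review: A wonderful film from start to finish. sentiment: positive"],
]

for i, task_data in enumerate(tasks):
    replay = []
    if i > 0:  # before a new task, replay pseudo samples of earlier tasks
        replay = generate_pseudo_samples(max(1, int(0.2 * len(task_data))))
    train_on(replay + list(task_data))
```

Because the replayed samples come from the model's own generations rather than a stored buffer, this scheme needs no extra memory beyond the model itself, which is the property the abstract emphasizes.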
