Efficient Continual Adaptation for Generative Adversarial Networks
We present a continual learning approach for generative adversarial networks (GANs), based on designing and leveraging parameter-efficient feature-map transformations. Our approach learns a set of global and task-specific parameters. The global parameters are fixed across tasks, whereas the task-specific parameters act as local adapters for each task and efficiently transform the previous task's feature map into the new task's feature map. Moreover, we propose an element-wise residual bias in the transformed feature space, which substantially stabilizes GAN training. In contrast to recent approaches for continual GANs, we do not rely on memory replay, regularization towards previous tasks' parameters, or expensive weight transformations. Through extensive experiments on challenging and diverse datasets, we show that this feature-map-transformation-based approach outperforms state-of-the-art continual-GAN methods with substantially fewer parameters, and also generates high-quality samples that can be used in generative-replay-based continual learning of discriminative tasks.
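To make the described decomposition concrete, below is a minimal PyTorch sketch of one possible reading of the abstract: a generator layer keeps globally shared weights that are frozen across tasks, while each task adds a small adapter that transforms the shared feature map and applies an element-wise residual bias. All names (TaskAdapter, ContinualGenBlock, residual_bias) and the specific scale-and-shift form of the adapter are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn


class TaskAdapter(nn.Module):
    """Task-specific feature-map transformation with an element-wise residual bias (assumed form)."""

    def __init__(self, num_channels: int, height: int, width: int):
        super().__init__()
        # Lightweight per-task transformation: channel-wise scale and shift.
        self.scale = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        # Element-wise residual bias added in the transformed feature space.
        self.residual_bias = nn.Parameter(torch.zeros(1, num_channels, height, width))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return feat * self.scale + self.shift + self.residual_bias


class ContinualGenBlock(nn.Module):
    """Generator block with global (shared, frozen) weights and one local adapter per task."""

    def __init__(self, in_ch: int, out_ch: int, height: int, width: int, num_tasks: int):
        super().__init__()
        # Global parameters: trained on the first task, then kept fixed.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Local adapters: a small set of task-specific parameters per task.
        self.adapters = nn.ModuleList(
            TaskAdapter(out_ch, height, width) for _ in range(num_tasks)
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        shared_feat = self.conv(x)                   # shared feature map
        return self.adapters[task_id](shared_feat)   # task-specific transformation

Under this reading, only the adapter parameters grow with the number of tasks, which is what keeps the per-task cost small relative to storing or regularizing full copies of the generator weights.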