Actor-Critic based Training Framework for Abstractive Summarization

03/28/2018
by Piji Li, et al.

We present a training framework for neural abstractive summarization based on actor-critic approaches from reinforcement learning. In traditional neural network based methods, the objective is only to maximize the likelihood of the predicted summaries; no other assessment constraints are considered, which may lead to low-quality summaries or even incorrect sentences. To alleviate this problem, we employ an actor-critic framework to enhance the training procedure. For the actor, we use the typical attention-based sequence-to-sequence (seq2seq) framework as the policy network for summary generation. For the critic, we combine the maximum likelihood estimator with a well-designed global summary quality estimator, a neural-network-based binary classifier that aims to make the generated summaries indistinguishable from human-written ones. A policy gradient method is used to conduct the parameter learning, and an alternating training strategy is proposed for the joint training of the actor and critic models. Extensive experiments on benchmark datasets in different languages show that our framework achieves improvements over state-of-the-art methods.
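To make the alternating actor-critic procedure concrete, here is a minimal PyTorch sketch of the training loop described above: a seq2seq policy network samples summaries, a binary classifier scores them against human-written references, and a policy gradient (REINFORCE-style) update feeds the classifier's score back to the actor. All module names (`SummaryActor`, `QualityCritic`), sizes, and the reward shaping are illustrative assumptions, not the paper's exact implementation; in particular, the paper's critic also mixes in the maximum likelihood objective, which this sketch omits for brevity.

```python
# Illustrative sketch of alternating actor-critic training for summarization.
# Module names, dimensions, and reward shaping are assumptions for clarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID = 10000, 128, 256

class SummaryActor(nn.Module):
    """Toy seq2seq policy: encode the document, sample a summary token by token."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.enc = nn.GRU(EMB, HID, batch_first=True)
        self.dec = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src, max_len=30):
        _, h = self.enc(self.emb(src))                        # encoder final state
        tok = torch.zeros(src.size(0), 1, dtype=torch.long)   # <bos> token = 0
        tokens, logps = [], []
        for _ in range(max_len):
            o, h = self.dec(self.emb(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(o[:, -1]))
            nxt = dist.sample()                               # sample next token
            logps.append(dist.log_prob(nxt))
            tok = nxt.unsqueeze(1)
            tokens.append(tok)
        return torch.cat(tokens, 1), torch.stack(logps, 1)

class QualityCritic(nn.Module):
    """Binary classifier: P(summary is human-written) serves as the reward."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.cls = nn.Linear(HID, 1)

    def forward(self, summary):
        _, h = self.rnn(self.emb(summary))
        return torch.sigmoid(self.cls(h[-1])).squeeze(1)

actor, critic = SummaryActor(), QualityCritic()
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def train_step(src, gold):
    # Critic step: learn to separate human summaries from sampled ones.
    with torch.no_grad():
        fake, _ = actor(src)
    pred = torch.cat([critic(gold), critic(fake)])
    label = torch.cat([torch.ones(gold.size(0)), torch.zeros(fake.size(0))])
    c_loss = F.binary_cross_entropy(pred, label)
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()

    # Actor step: REINFORCE update with the critic's score as the reward.
    sample, logps = actor(src)
    reward = critic(sample).detach()                # global quality signal
    a_loss = -(logps.sum(1) * reward).mean()        # policy gradient objective
    opt_a.zero_grad(); a_loss.backward(); opt_a.step()
    return c_loss.item(), a_loss.item()

src = torch.randint(1, VOCAB, (4, 50))    # dummy document batch
gold = torch.randint(1, VOCAB, (4, 30))   # dummy reference summaries
print(train_step(src, gold))
```

Alternating the two updates in each step gives the actor a moving target: as generated summaries improve, the critic must find finer distinctions from human text, which in turn sharpens the reward signal for the policy.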
