Toward Building Conversational Recommender Systems: A Contextual Bandit Approach

06/04/2019
by   Xiaoying Zhang, et al.

Contextual bandit algorithms have gained increasing popularity in recommender systems because they can learn to adapt recommendations by making an exploration-exploitation trade-off. Recommender systems equipped with traditional contextual bandit algorithms are usually trained with behavioral feedback (e.g., clicks) from users on items. The learning speed can be slow because behavioral feedback by nature does not carry sufficient information, so extensive exploration has to be performed. To address this problem, we propose conversational recommendation, in which the system occasionally asks the user questions about her interests. We first generalize contextual bandits to leverage not only behavioral feedback (arm-level feedback) but also verbal feedback (users' interest in categories, topics, etc.). We then propose a new UCB-based algorithm and theoretically prove that it can indeed reduce the amount of exploration needed during learning. We also design several strategies for asking questions to further optimize the speed of learning. Experiments on synthetic data, Yelp data, and news recommendation data from Toutiao demonstrate the efficacy of the proposed algorithm.
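The idea of combining arm-level behavioral feedback with category-level verbal feedback can be illustrated with a minimal LinUCB-style sketch. This is not the paper's actual algorithm; the class name, the `update_verbal` method, and the choice to model a category answer as a linear observation on the category's mean feature vector are all illustrative assumptions.

```python
import numpy as np

class ConversationalLinUCB:
    """Hypothetical sketch of a conversational contextual bandit.

    Standard LinUCB learns from arm-level clicks; here the system can
    also fold in occasional verbal feedback about a whole category,
    treated (as a simplifying assumption) as one extra linear
    observation on that category's mean feature vector.
    """

    def __init__(self, dim, alpha=1.0, lam=1.0):
        self.alpha = alpha                # exploration strength
        self.A = lam * np.eye(dim)        # ridge-regularized Gram matrix
        self.b = np.zeros(dim)            # accumulated reward-weighted features

    def select(self, arm_features):
        """Pick the arm with the highest upper confidence bound."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b            # ridge estimate of user preferences
        ucb = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)
               for x in arm_features]
        return int(np.argmax(ucb))

    def update(self, x, reward):
        """Behavioral (arm-level) feedback: a click/no-click on item x."""
        self.A += np.outer(x, x)
        self.b += reward * x

    def update_verbal(self, category_arms, liked, weight=1.0):
        """Verbal (category-level) feedback, folded in as an observation
        on the category's mean feature vector (an assumed simplification)."""
        x_bar = np.mean(category_arms, axis=0)
        self.A += weight * np.outer(x_bar, x_bar)
        self.b += weight * (1.0 if liked else 0.0) * x_bar
```

Because a single verbal answer shrinks the confidence ellipsoid along the whole category's direction at once, it can substitute for many exploratory item-level pulls, which is the intuition behind the reduced-exploration claim in the abstract.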
