UCB Exploration via Q-Ensembles

06/05/2017
by   Richard Y. Chen, et al.

We show how an ensemble of Q^*-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well-established algorithms from the bandit setting and adapt them to the Q-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). Our experiments show significant gains on the Atari benchmark.
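The core idea of UCB action selection over a Q-ensemble can be sketched as follows: treat the ensemble's disagreement about Q(s, a) as an uncertainty bonus, and act greedily with respect to mean plus scaled standard deviation. This is an illustrative sketch only, not the authors' implementation; the function name `ucb_action` and the exploration coefficient `lam` are assumptions for demonstration.

```python
import numpy as np

def ucb_action(q_values, lam=1.0):
    """Pick an action via a UCB rule over an ensemble of Q estimates.

    q_values: array of shape (ensemble_size, num_actions), where each row
        holds one ensemble member's Q(s, a) values for the current state.
    lam: exploration coefficient weighting the uncertainty bonus
        (a hypothetical hyperparameter name for this sketch).
    """
    mean = q_values.mean(axis=0)  # empirical mean Q over the ensemble
    std = q_values.std(axis=0)    # ensemble disagreement as an uncertainty proxy
    return int(np.argmax(mean + lam * std))

# Toy example: 3 ensemble members, 4 actions.
q = np.array([[1.0, 2.0, 0.5, 1.5],
              [1.2, 1.8, 0.4, 2.5],
              [0.8, 2.2, 0.6, 0.5]])
# With lam=0 this reduces to acting on the ensemble mean (action 1 here);
# with lam=1 the high disagreement on action 3 makes it the UCB choice.
```

Setting `lam=0` recovers greedy action selection on the ensemble mean, so the coefficient directly trades off exploitation against exploration driven by ensemble uncertainty.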
