Exploration for Multi-task Reinforcement Learning with Deep Generative Models

11/29/2016
by Sai Praveen Bangaru, et al.

Exploration in multi-task reinforcement learning is critical for training agents to deduce the underlying MDP. Many existing exploration frameworks, such as E^3, R_max, and Thompson sampling, assume a single stationary MDP and are not suitable for system identification in the multi-task setting. We present a novel method to facilitate exploration in multi-task reinforcement learning using deep generative models. We supplement our method with a low-dimensional energy model to learn the underlying MDP distribution and provide a resilient and adaptive exploration signal to the agent. We evaluate our method on a new set of environments and provide an intuitive interpretation of our results.
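The abstract describes rewarding the agent with an exploration signal derived from a learned generative model of the environment. As a rough illustration of that idea (not the authors' algorithm), the hypothetical sketch below uses a simple running Gaussian density model over 1-D observations as a stand-in for the deep generative / energy model, and treats the negative log-likelihood of a new observation as a novelty bonus: rarely-visited regions of state space yield larger bonuses.

```python
# Hypothetical density-based exploration bonus. The Gaussian model and the
# GaussianNoveltyBonus class are illustrative assumptions, not from the paper.
import math

class GaussianNoveltyBonus:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford's algorithm)

    def update(self, x):
        # Incrementally fit the density model to a new observation.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def bonus(self, x):
        # Negative log-density under the current Gaussian fit; larger for
        # observations far from anything the model has seen.
        if self.n < 2:
            return 1.0  # treat everything as maximally novel before fitting
        var = max(self.m2 / (self.n - 1), 1e-8)
        return 0.5 * math.log(2 * math.pi * var) + (x - self.mean) ** 2 / (2 * var)

model = GaussianNoveltyBonus()
for s in [0.0, 0.1, -0.1, 0.05, -0.05]:  # states visited so far
    model.update(s)
# A state near the visited region earns a smaller bonus than a distant one.
print(model.bonus(0.0) < model.bonus(5.0))  # True
```

A deep generative model plays the same role at scale: it assigns low likelihood (high energy) to unfamiliar transitions, and that signal can be added to the task reward to drive systematic exploration across tasks.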
