Learning Deep Policies for Physics-Based Manipulation in Clutter

03/21/2018
by Wissam Bejjani, et al.

Uncertainty in modeling real-world physics makes transferring traditional open-loop motion planning techniques from simulation to the real world particularly challenging. Available closed-loop policy learning approaches for physics-based manipulation tasks typically either focus on single-object manipulation or rely on imitation learning, which inherently constrains task generalization and performance to the available demonstrations. In this work, we propose an approach to learning a policy for physics-based manipulation in clutter that enables the robot to react to the uncertain dynamics of the real world. We start by presenting an imitation learning technique that compiles demonstrations from a sampling-based planner into an action-value function encoded as a deep neural network. We then use the learned action-value function to guide a look-ahead planner, giving us a control policy. Lastly, we propose to refine the deep action-value function through reinforcement learning, taking advantage of the look-ahead planner. We evaluate our approach in a physics-enabled simulation environment with artificially injected uncertainty, as well as on a real-world manipulation-in-clutter task.
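To make the pipeline concrete, the following is a minimal sketch, not the authors' implementation, of the two core pieces the abstract describes: regressing a deep action-value network onto planner demonstrations, and using that network to rank one-step physics rollouts inside a look-ahead controller. All names here (QNetwork, simulate, the (state, action_values) demonstration format) are assumptions made for illustration; the paper's state encoding, action set, and planner targets may differ.

import torch
import torch.nn as nn

# Hypothetical Q-network: maps a clutter-scene feature vector to one
# value per discrete manipulation action (e.g., push directions).
class QNetwork(nn.Module):
    def __init__(self, state_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, state):
        return self.net(state)

def fit_from_demonstrations(q_net, demos, epochs=10, lr=1e-3):
    """Imitation step: regress the network onto action values extracted
    from sampling-based-planner demonstrations. `demos` is assumed to be
    a list of (state_tensor, target_values_tensor) pairs."""
    opt = torch.optim.Adam(q_net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for state, target_values in demos:
            opt.zero_grad()
            loss = loss_fn(q_net(state), target_values)
            loss.backward()
            opt.step()

def lookahead_policy(q_net, state, actions, simulate):
    """Control step: simulate each candidate action one step forward with
    a physics model, then pick the action whose successor state the
    learned Q-function values highest. `simulate` is a hypothetical
    physics rollout returning the successor state as a tensor."""
    best_action, best_value = None, float("-inf")
    with torch.no_grad():
        for a in actions:
            next_state = simulate(state, a)
            value = q_net(next_state).max().item()
            if value > best_value:
                best_action, best_value = a, value
    return best_action

In this reading, the look-ahead planner compensates for errors in the learned values by grounding each decision in a short physics rollout, while the reinforcement learning stage the abstract mentions would further refine q_net using the outcomes of these look-ahead decisions.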
