Interactive Trajectory Adaptation through Force-guided Bayesian Optimization
Flexible manufacturing processes demand that robots adapt easily to changes in the environment and interact with humans. In such dynamic scenarios, robotic tasks may be programmed through learning-from-demonstration (LfD) approaches, where a nominal plan of the task is learned by the robot. However, the learned plan may need to be adapted to fulfill additional requirements or to overcome unexpected environmental changes. When the required adaptation occurs at the end-effector trajectory level, a human operator may want to intuitively show the robot the desired changes by physically interacting with it. In this scenario, the robot needs to infer the human-intended changes from noisy haptic data, adapt quickly, and execute the nominal task plan when no further adaptation is needed. This paper addresses these challenges by leveraging LfD and Bayesian optimization to endow the robot with data-efficient adaptation capabilities. Our approach exploits the sensed interaction forces to guide the adaptation, and speeds up the optimization process by defining local search spaces extracted from the learned task model. We show how our framework quickly adapts the learned spatiotemporal patterns of the task, producing deformed trajectory distributions that are consistent with both the nominal plan and the changes introduced by the human.
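To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of the force-guided Bayesian optimization principle described above: a Gaussian-process surrogate and an expected-improvement acquisition search for a trajectory offset that minimizes a force-based cost, within a local search space. The function sensed_force is a hypothetical stand-in for the haptic signal, the target value 0.35 m is an illustrative human-intended change, and the bounds (lo, hi) stand in for the local search space that the paper extracts from the learned task model.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def sensed_force(offset):
    # Hypothetical haptic cost: the interaction force shrinks as the
    # end-effector offset approaches the human-intended change (0.35 m here).
    return (offset - 0.35) ** 2 + 0.01 * rng.standard_normal()

# Local search space, standing in for bounds derived from the learned task
# model (e.g. the variance of the demonstrated trajectory distribution).
lo, hi = 0.0, 0.5
X = rng.uniform(lo, hi, size=(3, 1))            # initial probe offsets
y = np.array([sensed_force(x[0]) for x in X])   # observed force-based costs

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-4)

for _ in range(15):
    gp.fit(X, y)
    cand = np.linspace(lo, hi, 200).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    # Expected improvement: favor offsets likely to reduce the sensed force.
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, sensed_force(x_next[0]))

print(f"adapted offset: {X[np.argmin(y)][0]:.3f} m")

Restricting the candidates to (lo, hi) illustrates why the local search spaces matter for data efficiency: the surrogate only needs a handful of physical interactions to locate the intended change, rather than exploring the robot's full workspace.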