Skeletal Feature Compensation for Imitation Learning with Embodiment Mismatch

04/15/2021
by Eddy Hudson, et al.

Learning from demonstrations in the wild (e.g., YouTube videos) is a tantalizing goal in imitation learning. However, to achieve this goal, imitation learning algorithms must deal with the fact that demonstrators and learners may have bodies that differ from one another. This condition, known as "embodiment mismatch," is ignored by many recent imitation learning algorithms. Our proposed imitation learning technique, SILEM (Skeletal feature compensation for Imitation Learning with Embodiment Mismatch), addresses a particular type of embodiment mismatch by introducing a learned affine transform that compensates for differences between the skeletal features obtained from the learner and those obtained from the expert. We create toy domains based on PyBullet's HalfCheetah and Ant to assess SILEM's benefits for this type of embodiment mismatch. We also provide qualitative and quantitative results on more realistic problems: teaching simulated humanoid agents, including Atlas from Boston Dynamics, to walk by observing human demonstrations.
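The core mechanism described above is a learned affine transform applied to the learner's skeletal features so that they can be compared against the expert's. Below is a minimal, illustrative PyTorch sketch of such a compensator. The class name, feature dimensionality, and identity initialization are assumptions made here for demonstration, not details taken from the paper; the actual training signal used to fit the transform is described in the full text.

```python
import torch
import torch.nn as nn


class SkeletalAffineCompensator(nn.Module):
    """Hypothetical sketch: a learned affine transform over skeletal features.

    Maps learner skeletal features x to A @ x + b so they can be compared
    against expert skeletal features despite embodiment mismatch.
    """

    def __init__(self, feature_dim: int):
        super().__init__()
        # nn.Linear implements the affine map A @ x + b.
        self.linear = nn.Linear(feature_dim, feature_dim)
        # Assumption: initialize to the identity transform so training
        # starts from "no compensation" and learns only the correction.
        nn.init.eye_(self.linear.weight)
        nn.init.zeros_(self.linear.bias)

    def forward(self, learner_features: torch.Tensor) -> torch.Tensor:
        return self.linear(learner_features)


if __name__ == "__main__":
    feature_dim = 12  # illustrative; e.g., joint angles and link heights
    compensator = SkeletalAffineCompensator(feature_dim)
    learner_feats = torch.randn(32, feature_dim)   # a batch of learner features
    compensated = compensator(learner_feats)       # features mapped toward expert space
    print(compensated.shape)  # torch.Size([32, 12])
```

In a full pipeline, such a transform would be trained jointly with the imitation objective so that compensated learner features match the expert's feature distribution; this snippet shows only the transform itself.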
