Human Motion Modeling using DVGANs

04/27/2018
by Xiao Lin, et al.

We present a novel generative model for human motion modeling using Generative Adversarial Networks (GANs). We formulate the GAN discriminator with dense validation at each time-scale and perturb the discriminator input to make it translation invariant. Our model is capable of both motion generation and motion completion. Our evaluations demonstrate resiliency to noise, generalization across actions, and generation of long, diverse sequences. We evaluate our approach on the Human 3.6M and CMU motion capture datasets using inception scores.
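The two discriminator ideas named in the abstract (dense validation at each time-scale, and input perturbation for translation invariance) can be sketched as follows. This is a hypothetical illustration under assumed function names (`perturb_translation`, `multiscale_scores`) and a toy stand-in critic, not the authors' implementation:

```python
import numpy as np

def perturb_translation(seq, crop_len, rng):
    """Randomly crop a temporal window so the discriminator never sees
    absolute time indices, encouraging translation invariance (illustrative)."""
    start = rng.integers(0, seq.shape[0] - crop_len + 1)
    return seq[start:start + crop_len]

def multiscale_scores(seq, scales=(1, 2, 4)):
    """Validate the sequence densely at several time-scales: average-pool
    frames by each factor, then apply a toy stand-in for a learned critic."""
    scores = []
    for s in scales:
        t = (seq.shape[0] // s) * s           # truncate to a multiple of s
        pooled = seq[:t].reshape(-1, s, seq.shape[1]).mean(axis=1)
        scores.append(float(pooled.mean()))   # placeholder critic score
    return scores

rng = np.random.default_rng(0)
motion = rng.standard_normal((64, 54))        # 64 frames, 54 joint features
crop = perturb_translation(motion, 32, rng)
scores = multiscale_scores(crop)              # one score per time-scale
```

In a real GAN, each per-scale score would come from a learned sub-discriminator and the per-scale losses would be summed; the sketch only shows the data flow.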
