Learning Representations of Spatial Displacement through Sensorimotor Prediction
Robots act in their environment through sequences of continuous motor commands. Because of the high dimensionality of the motor space and the infinite number of possible combinations of successive motor commands, agents need compact representations that capture the structure of the resulting displacements. In the case of an autonomous agent with no a priori knowledge about its sensorimotor apparatus, this compression has to be learned. We propose to use Recurrent Neural Networks to encode motor sequences into a compact representation, which is used to predict the consequence of motor sequences in terms of sensory changes. We show that sensory prediction can successfully guide the compression of motor sequences into representations that are organized topologically in terms of spatial displacement.
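To make the described setup concrete, the following is a minimal sketch of the kind of architecture the abstract outlines: a recurrent encoder compresses a sequence of motor commands into a fixed-size code, and a small network uses that code (together with the pre-sequence sensory state) to predict the resulting sensory change. This is not the authors' exact model; the choice of GRU, the layer sizes, the dimensions, and the MSE loss are assumptions made for illustration only.

```python
# Hedged sketch, not the paper's implementation: layer sizes, GRU choice,
# and the MSE training objective are illustrative assumptions.
import torch
import torch.nn as nn

class MotorSequenceEncoder(nn.Module):
    def __init__(self, motor_dim=2, code_dim=3, sensor_dim=10, hidden_dim=32):
        super().__init__()
        self.rnn = nn.GRU(motor_dim, hidden_dim, batch_first=True)
        self.to_code = nn.Linear(hidden_dim, code_dim)   # compact displacement code
        self.predictor = nn.Sequential(                  # predicts post-sequence sensors
            nn.Linear(code_dim + sensor_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, sensor_dim),
        )

    def forward(self, motor_seq, sensors_before):
        _, h_last = self.rnn(motor_seq)                  # encode the motor sequence
        code = self.to_code(h_last[-1])                  # fixed-size representation
        pred = self.predictor(torch.cat([code, sensors_before], dim=-1))
        return code, pred

# Toy usage with placeholder data: the code is shaped purely by how useful it is
# for predicting the sensory consequence of the motor sequence.
model = MotorSequenceEncoder()
motor_seq = torch.randn(8, 5, 2)   # 8 sequences of 5 motor commands, each of dim 2
s_before = torch.randn(8, 10)      # sensory state before the sequence
s_after = torch.randn(8, 10)       # sensory state after the sequence
code, s_pred = model(motor_seq, s_before)
loss = nn.functional.mse_loss(s_pred, s_after)
loss.backward()
```

Under this kind of objective, motor sequences that produce the same displacement should receive similar codes, which is the sense in which the learned representation can become topologically organized with respect to spatial displacement.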