Egocentric affordance detection with the one-shot geometry-driven Interaction Tensor

06/13/2019
by Eduardo Ruiz, et al.

This abstract describes our recent [4,7] and latest work on detecting affordances in visually perceived 3D scenes. Our method builds on the hypothesis that geometry alone provides enough information to detect significant interaction possibilities in the environment. The motivation is that geometric information is intimately related to the physical interactions that objects in the world afford. The approach uses a generic representation both for the interaction of everyday objects, such as a mug or an umbrella, with the environment, and for more complex affordances such as humans Sitting or Riding a motorcycle. Experiments with synthetic and real RGB-D scenes show that this representation enables the prediction of affordance candidate locations in novel environments at fast rates and from a single (one-shot) training example. Determining affordances is a crucial step towards systems that need to perceive and interact with their surroundings. We illustrate the output in two cases, a simulated robot and an Augmented Reality setting, both perceiving in an egocentric manner.
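To make the one-shot, geometry-driven idea concrete, below is a minimal, hypothetical sketch in Python (NumPy + SciPy). It is not the paper's Interaction Tensor, which encodes richer directional interaction information; it only mirrors the high-level pipeline: from a single training example (an object point cloud posed against a supporting surface), store a sparse object-centric sample of nearby environment points, then score candidate locations in a novel scene by how much of that stored geometry the scene reproduces. All names (train_one_shot, score_location), radii, and tolerances here are illustrative assumptions, not the paper's values.

```python
# Hypothetical sketch of one-shot geometric affordance scoring.
# This simplification stands in for the Interaction Tensor: it only checks
# whether a novel scene reproduces the environment geometry observed around
# the object in the single training example.
import numpy as np
from scipy.spatial import cKDTree

def train_one_shot(object_pts, scene_pts, n_samples=64, radius=0.05, seed=0):
    """One-shot training: keep a sparse, object-centric sample of the
    training-scene points that lie within `radius` of the object."""
    rng = np.random.default_rng(seed)
    dists, _ = cKDTree(object_pts).query(scene_pts)
    near = scene_pts[dists < radius]                 # interaction region
    pick = rng.choice(len(near), size=min(n_samples, len(near)), replace=False)
    return near[pick] - object_pts.mean(axis=0)      # anchors in object frame

def score_location(rel_anchors, scene_tree, location, tol=0.02):
    """Fraction of stored anchors that, translated so the object sits at
    `location`, land on actual geometry in the novel scene."""
    d, _ = scene_tree.query(rel_anchors + location)
    return float(np.mean(d < tol))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Training example: a mug-sized blob resting on a flat tabletop (z = 0).
    table = np.column_stack([rng.uniform(0, 1, (2000, 2)), np.zeros(2000)])
    mug = rng.normal([0.5, 0.5, 0.05], 0.02, (500, 3))
    descriptor = train_one_shot(mug, table)

    # Novel scene: a larger flat surface; test one plausible and one
    # floating candidate placement for the same interaction.
    novel = np.column_stack([rng.uniform(0, 2, (4000, 2)), np.zeros(4000)])
    tree = cKDTree(novel)
    for loc in ([0.3, 0.3, 0.05], [1.5, 1.2, 0.50]):
        print(loc, "score:", score_location(descriptor, tree, np.array(loc)))
```

A real system would sweep such a test densely over scene locations (and orientations); the point of the sketch is only that a single training example plus raw geometry can suffice to rank candidate placements.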
