Language and Visual Entity Relationship Graph for Agent Navigation

10/19/2020
by Yicong Hong, et al.

Vision-and-Language Navigation (VLN) requires an agent to navigate in a real-world environment following natural language instructions. From both the textual and visual perspectives, we find that the relationships among the scene, its objects, and directional clues are essential for the agent to interpret complex instructions and correctly perceive the environment. To capture and utilize these relationships, we propose a novel Language and Visual Entity Relationship Graph for modelling the inter-modal relationships between text and vision, and the intra-modal relationships among visual entities. We propose a message passing algorithm for propagating information between language elements and visual entities in the graph, which we then combine to determine the next action to take. Experiments show that by taking advantage of the relationships we are able to improve over the state-of-the-art. On the Room-to-Room (R2R) benchmark, our method achieves the new best performance on the test unseen split with a success rate weighted by path length (SPL) of 52%. On the Room-for-Room (R4R) dataset, our method significantly improves the previous best from 13% to 34% on the success rate weighted by normalized dynamic time warping (SDTW). Code is available at: https://github.com/YicongHong/Entity-Graph-VLN.
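The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the general idea: one round of inter-modal (language-to-vision) attention followed by intra-modal (vision-to-vision) message passing, with the updated entity states used to score candidate actions. All class names, dimensions, and the scoring step are illustrative assumptions, not the authors' code; see the linked repository for the real implementation.

```python
# Hedged sketch of cross-modal message passing for VLN-style action
# selection. Names, dimensions, and architecture choices are assumptions
# for illustration, not the method from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalMessagePassing(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # Project language and visual features into a shared space.
        self.lang_proj = nn.Linear(dim, dim)
        self.vis_proj = nn.Linear(dim, dim)
        # Edge transform for intra-modal messages among visual entities.
        self.edge = nn.Linear(2 * dim, dim)
        # Recurrent update of each visual node from its incoming messages.
        self.update = nn.GRUCell(dim, dim)

    def forward(self, lang, vis):
        # lang: (L, dim) instruction token features
        # vis:  (N, dim) visual entity features (scene/object/direction)
        # Inter-modal step: each visual node attends over language tokens.
        scores = self.vis_proj(vis) @ self.lang_proj(lang).t()
        attn = torch.softmax(scores / lang.size(-1) ** 0.5, dim=-1)
        lang_msg = attn @ lang  # (N, dim) language-grounded message
        # Intra-modal step: fully connected messages among visual entities.
        n = vis.size(0)
        i, j = torch.meshgrid(torch.arange(n), torch.arange(n), indexing="ij")
        pair = torch.cat([vis[i.reshape(-1)], vis[j.reshape(-1)]], dim=-1)
        vis_msg = F.relu(self.edge(pair)).view(n, n, -1).mean(dim=1)
        # Combine both messages and update the node states.
        return self.update(lang_msg + vis_msg, vis)

# Toy usage: propagate one round of messages, then score candidate actions.
torch.manual_seed(0)
mp = CrossModalMessagePassing(dim=256)
lang = torch.randn(12, 256)   # 12 instruction token features
vis = torch.randn(5, 256)     # 5 navigable directions / visual entities
updated = mp(lang, vis)
logits = updated @ lang.mean(dim=0)  # one action score per entity
print(logits.shape)  # torch.Size([5])
```

In this sketch a single attention step grounds each visual entity in the instruction before entities exchange messages; stacking several such rounds would let directional and object cues propagate further through the graph.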

