Multimodal Semantic Simulations of Linguistically Underspecified Motion Events

10/03/2016
by Nikhil Krishnaswamy, et al.

In this paper, we describe a system for generating three-dimensional visual simulations of natural language motion expressions. We use a rich formal model of events and their participants to generate simulations that satisfy the minimal constraints entailed by the associated utterance, relying on semantic knowledge of physical objects and motion events. This paper outlines technical considerations and discusses the implementation of these semantic models in such a system.
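To illustrate the idea of satisfying only the minimal constraints an utterance entails, here is a small hedged sketch (not the paper's actual implementation; all names, the `EVENT_CONSTRAINTS` table, and the default values are illustrative assumptions): an underspecified motion event such as "the ball rolls" is completed with parameter values that meet the verb's minimal semantic requirements, while any values the utterance does supply are preserved.

```python
# Hypothetical sketch of minimal-constraint specification for an
# underspecified motion event. Names and defaults are illustrative
# assumptions, not the system described in the paper.
from dataclasses import dataclass, field

@dataclass
class MotionEvent:
    verb: str
    obj: str
    # Parameters the utterance may leave unspecified.
    params: dict = field(default_factory=dict)

# Minimal constraints per verb: each required parameter maps to a function
# that fills in a default consistent with the event's semantics. These
# defaults stand in for richer knowledge of objects and motion events.
EVENT_CONSTRAINTS = {
    "roll": {
        "surface_contact": lambda p: True,                      # rolling entails contact
        "speed": lambda p: p.get("speed", 1.0),                 # any positive speed
        "direction": lambda p: p.get("direction", (1.0, 0.0, 0.0)),
    },
}

def specify(event: MotionEvent) -> MotionEvent:
    """Return a fully specified event: values given by the utterance are
    kept; the rest are filled with defaults satisfying the verb's
    minimal constraints."""
    constraints = EVENT_CONSTRAINTS[event.verb]
    filled = {name: fill(event.params) for name, fill in constraints.items()}
    filled.update(event.params)  # utterance-supplied values take precedence
    return MotionEvent(event.verb, event.obj, filled)

# "The ball rolls" leaves speed and direction underspecified.
e = specify(MotionEvent("roll", "ball"))
```

Running `specify` on the underspecified event yields a parameter set that a simulator could render directly; an utterance like "the ball rolls quickly to the left" would instead pass explicit `speed` and `direction` values, which the sketch keeps unchanged.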
