Neural Models of the Psychosemantics of 'Most'

04/04/2019
by Lewis O'Sullivan et al.

How are the meanings of linguistic expressions related to their use in concrete cognitive tasks? Visual identification tasks show that human speakers can exhibit considerable variation in their understanding, representation and verification of certain quantifiers. This paper initiates an investigation into neural models of these psychosemantic tasks. We trained two types of network -- a convolutional neural network (CNN) model and a recurrent model of visual attention (RAM) -- on the "most" verification task from Pietroski et al. (2009), manipulating the visual scene and novel notions of task duration. Our results qualitatively mirror certain features of human performance (such as sensitivity to the ratio of set sizes, indicating a reliance on approximate number) while differing in interesting ways (such as exhibiting a subtly different pattern for the effect of image type). We conclude by discussing the prospects for using neural models as cognitive models of this and other psychosemantic tasks.
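To make the setup concrete, here is a minimal sketch (not the authors' code) of how a CNN could be trained on a "most" verification task: given a rendered scene of two dot colours, the network outputs a yes/no judgment on whether the target colour forms a majority. The architecture, image size, and the toy scene generator are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a "most" verifier CNN; all sizes and names are assumptions.
import torch
import torch.nn as nn

class MostVerifierCNN(nn.Module):
    """Binary classifier: does the target colour form a majority of the dots?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: "most dots are the target colour"
        )

    def forward(self, x):  # x: (batch, 3, 64, 64) rendered dot scenes
        return self.classifier(self.features(x))

def random_dot_scene(n_target, n_other, size=64):
    """Toy scene generator: coloured pixels stand in for dots (hypothetical)."""
    img = torch.zeros(3, size, size)
    for channel, n in ((0, n_target), (2, n_other)):  # red = target, blue = other
        ys, xs = torch.randint(0, size, (n,)), torch.randint(0, size, (n,))
        img[channel, ys, xs] = 1.0
    return img

if __name__ == "__main__":
    model = MostVerifierCNN()
    scene = random_dot_scene(n_target=12, n_other=9).unsqueeze(0)
    label = torch.tensor([[1.0]])  # 12 > 9, so "most" is true
    loss = nn.BCEWithLogitsLoss()(model(scene), label)
    loss.backward()  # gradients for one illustrative training step
    print(f"loss: {loss.item():.3f}")
```

In the paper's human-subject analogue, performance depends on the ratio between the two set sizes rather than their absolute difference; varying `n_target` and `n_other` at controlled ratios is how such a manipulation could be probed in a model like this.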
