A Tale of Two Lexica: Testing Computational Hypotheses with Deep Convolutional Neural Networks

04/13/2021
by Enes Avcu, et al.

Gow's (2012) dual lexicon model suggests that the primary purpose of words is to mediate the mappings between acoustic-phonetic input and other forms of linguistic representation. Motivated by evidence from functional imaging, aphasia, and behavioral studies, the model argues for the existence of two parallel wordform stores, one associated with the dorsal processing stream and one with the ventral processing stream. In this paper, we tested the hypothesis that the complex but systematic mapping between sound and articulation in the dorsal stream imposes different computational pressures on feature sets than the more arbitrary mapping between sound and meaning in the ventral stream. To test this hypothesis, we created two deep convolutional neural networks (CNNs): the dorsal network was trained to identify individual spoken words, while the ventral network was trained to map the same words onto semantic classes. We then extracted patterns of activation from the penultimate layer of each network and tested how well the features generated by each network supported generalization to linguistic categorization tasks associated with the dorsal versus ventral processing streams. Our preliminary results showed that both models successfully learned their tasks. In secondary generalization testing, the ventral CNN outperformed the dorsal CNN on a semantic task (concreteness classification), while the dorsal CNN outperformed the ventral CNN on articulation-related tasks (classification by onset phoneme class and by syllable length). These results are consistent with the hypothesis that the divergent processing demands of the ventral and dorsal processing streams impose computational pressures favoring the development of multiple lexica.
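To make the modeling setup concrete, the sketch below (not the authors' code) shows how such a pair of networks and the penultimate-layer feature extraction might be implemented in PyTorch. The spectrogram input shape, the 1,000-word vocabulary for the dorsal network, the 10 semantic classes for the ventral network, and the 256-dimensional penultimate layer are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the two-network setup described in the abstract.
# All sizes (input shape, vocabulary, class counts, feature dim) are assumptions.
import torch
import torch.nn as nn


class SpeechCNN(nn.Module):
    """Convolutional encoder over spectrograms with a task-specific head."""

    def __init__(self, n_outputs: int, feature_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, feature_dim), nn.ReLU(),  # penultimate layer
        )
        self.head = nn.Linear(feature_dim, n_outputs)

    def forward(self, x):
        return self.head(self.encoder(x))

    def penultimate(self, x):
        """Activations used as features in secondary generalization tests."""
        with torch.no_grad():
            return self.encoder(x)


# "Dorsal" network: classify which word token was spoken (word identification).
dorsal_net = SpeechCNN(n_outputs=1000)
# "Ventral" network: map the same spoken words onto semantic classes.
ventral_net = SpeechCNN(n_outputs=10)

# After each network is trained on its own task, penultimate-layer activations
# can be fed to a simple probe classifier (e.g., logistic regression) for tasks
# such as concreteness, onset phoneme class, or syllable-length classification.
fake_batch = torch.randn(8, 1, 64, 100)               # stand-in spectrograms
dorsal_feats = dorsal_net.penultimate(fake_batch)     # shape: (8, 256)
ventral_feats = ventral_net.penultimate(fake_batch)   # shape: (8, 256)
```

In this kind of design, keeping the encoder identical across the two networks ensures that any difference in the learned penultimate-layer features can be attributed to the training task (word identification versus semantic classification) rather than to architecture.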
