Detecting cognitive impairments by agreeing on interpretations of linguistic features
Linguistic features have shown promising applications for detecting various cognitive impairments. To improve detection accuracy, two common approaches are to increase the amount of data or the number of linguistic features; however, acquiring additional clinical data can be expensive, and hand-crafting features is burdensome. In this paper, we take a third approach, putting forward Consensus Networks (CN), a framework that diagnoses after reaching agreements between modalities. We divide the linguistic features into non-overlapping subsets according to their natural categories, and let neural networks ("ePhysicians") learn low-dimensional representations ("interpretation vectors") that agree with each other. These representations are passed into a neural network classifier, resulting in a framework for assessing cognitive impairments. We also present methods that empirically improve the performance of CN, namely adding a noise modality and allowing gradients to propagate to the interpreters while optimizing the classifier. We then present two ablation studies illustrating the effectiveness of CN: dividing features into subsets by their natural modalities is more beneficial than dividing them randomly, and models built with the consensus setting outperform those without it given the same modalities of features. To further understand what happens inside consensus networks, we visualize the interpretation vectors during training; in aggregate, they exhibit symmetry. Overall, using all 413 linguistic features, our models significantly outperform the traditional classifiers used in state-of-the-art papers.
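To make the consensus setting concrete, the following is a minimal PyTorch sketch of the architecture described above: one interpreter ("ePhysician") per feature modality plus a noise modality, an agreement term over the interpretation vectors, and a classifier whose gradients also flow back into the interpreters. The module names, layer sizes, and the pairwise-distance agreement penalty are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only; hyperparameters and the agreement term are assumptions.
import torch
import torch.nn as nn

class ConsensusNetwork(nn.Module):
    def __init__(self, modality_dims, interp_dim=16, n_classes=2, noise_dim=8):
        super().__init__()
        # One interpreter per feature modality, plus one for the noise modality.
        self.interpreters = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, interp_dim))
             for d in list(modality_dims) + [noise_dim]]
        )
        self.noise_dim = noise_dim
        self.classifier = nn.Sequential(
            nn.Linear(interp_dim * len(self.interpreters), 32),
            nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, modality_inputs):
        # modality_inputs: list of (batch, dim) tensors, one per feature modality.
        batch = modality_inputs[0].shape[0]
        noise = torch.randn(batch, self.noise_dim, device=modality_inputs[0].device)
        interps = [f(x) for f, x in zip(self.interpreters, list(modality_inputs) + [noise])]
        logits = self.classifier(torch.cat(interps, dim=-1))
        return logits, interps

def agreement_penalty(interps):
    # Encourage interpretation vectors to agree by penalizing pairwise distances.
    loss = 0.0
    for i in range(len(interps)):
        for j in range(i + 1, len(interps)):
            loss = loss + torch.mean((interps[i] - interps[j]) ** 2)
    return loss

# Usage (hypothetical feature splits):
#   logits, interps = model([acoustic_x, syntactic_x, semantic_x])
#   total_loss = cross_entropy(logits, y) + lambda_agree * agreement_penalty(interps)
# Because the classification loss is not detached from `interps`, its gradients
# also propagate back into the interpreters, as the abstract describes.
```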