Bayesian Mixed Multidimensional Scaling for Auditory Processing
Speech sounds differ subtly along a multidimensional auditory-perceptual space. Distinguishing speech sound categories is a perceptually demanding task, exhibiting large individual differences as well as inter-population (e.g., native versus non-native listeners) heterogeneity. The neural representational differences underlying these inter-individual and cross-language differences are not completely understood. Such questions have often been examined either using joint analyses that ignore individual heterogeneity or using separate analyses that cannot characterize similarities across subjects. Neither extreme, therefore, allows principled comparisons between populations and individuals. Motivated by these problems, we develop a novel Bayesian mixed multidimensional scaling method that accounts for heterogeneity across populations and subjects. We design a Markov chain Monte Carlo algorithm for posterior computation. We evaluate the method's empirical performance through synthetic experiments. Applied to a motivating auditory neuroscience study, the method provides novel insights into how biologically interpretable lower-dimensional latent features reconstruct the observed distances between the stimuli and vary across individuals and with their native language experience. Supplementary materials for this article, including a standardized description of the materials for reproducing the work, are available as an online supplement.
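To make the general idea concrete, the sketch below implements a minimal single-subject Bayesian multidimensional scaling model fit by random-walk Metropolis. This is an illustrative assumption, not the authors' mixed-effects model: it places a Gaussian prior on the latent coordinates, uses a plain Gaussian likelihood on the observed dissimilarities, and all function names and hyperparameters (`sigma`, `tau`, `step`) are hypothetical choices for demonstration.

```python
# Minimal Bayesian MDS sketch (illustrative only; not the paper's mixed model).
import numpy as np

rng = np.random.default_rng(0)

def pairwise_dists(X):
    """Euclidean distances between rows of the n x p configuration X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def log_posterior(X, D, sigma, tau):
    """Gaussian likelihood on observed distances plus Gaussian prior on X."""
    iu = np.triu_indices(D.shape[0], k=1)          # unique pairs only
    resid = D[iu] - pairwise_dists(X)[iu]
    log_lik = -0.5 * (resid ** 2).sum() / sigma ** 2
    log_prior = -0.5 * (X ** 2).sum() / tau ** 2
    return log_lik + log_prior

def bayesian_mds(D, p=2, n_iter=5000, step=0.05, sigma=0.1, tau=2.0):
    """Random-walk Metropolis over an n x p latent configuration."""
    n = D.shape[0]
    X = rng.normal(scale=0.1, size=(n, p))          # initial configuration
    lp = log_posterior(X, D, sigma, tau)
    samples = []
    for it in range(n_iter):
        prop = X + rng.normal(scale=step, size=X.shape)
        lp_prop = log_posterior(prop, D, sigma, tau)
        if np.log(rng.uniform()) < lp_prop - lp:    # MH accept/reject
            X, lp = prop, lp_prop
        if it >= n_iter // 2 and it % 10 == 0:      # thin after burn-in
            samples.append(X.copy())
    return np.stack(samples)

# Synthetic check: recover a noisy 2-D configuration of 8 stimuli.
true_X = rng.normal(size=(8, 2))
D_obs = pairwise_dists(true_X) + rng.normal(scale=0.05, size=(8, 8))
D_obs = np.abs((D_obs + D_obs.T) / 2)               # symmetrize, keep nonnegative
np.fill_diagonal(D_obs, 0.0)
draws = bayesian_mds(D_obs, p=2)
print("posterior draws shape:", draws.shape)        # (n_samples, 8, 2)
```

Note that, as in any MDS posterior, the latent configuration is identified only up to rotation, reflection, and translation; a practical implementation would post-process the draws (e.g., by Procrustes alignment) before summarizing them, and a mixed formulation like the paper's would additionally share structure across subjects and populations.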