Different Spectral Representations in Optimized Artificial Neural Networks and Brains

08/22/2022
by Richard C. Gerum, et al.

Recent studies suggest that artificial neural networks (ANNs) that match the spectral properties of the mammalian visual cortex – namely, the ∼1/n eigenspectrum of the covariance matrix of neural activities – achieve higher object recognition performance and greater robustness to adversarial attacks than those that do not. To our knowledge, however, no previous work has systematically explored how modifying an ANN's spectral properties affects performance. To fill this gap, we performed a systematic search over spectral regularizers, forcing the ANN's eigenspectrum to follow 1/n^α power laws with different exponents α. We found that larger exponents (around 2–3) lead to better validation accuracy and greater robustness to adversarial attacks in dense networks. This surprising finding holds for both shallow and deep networks, and it overturns the notion that the brain-like spectrum (corresponding to α ∼ 1) always optimizes ANN performance and/or robustness. For convolutional networks, the best α values depend on the task complexity and the evaluation metric: lower α values optimized both validation accuracy and adversarial robustness for networks performing a simple object recognition task (categorizing MNIST images of handwritten digits); for a more complex task (categorizing CIFAR-10 natural images), lower α values optimized validation accuracy whereas higher α values optimized adversarial robustness. These results have two main implications. First, they cast doubt on the notion that brain-like spectral properties (α ∼ 1) always optimize ANN performance. Second, they demonstrate the potential for fine-tuned spectral regularizers to optimize a chosen design metric, i.e., accuracy and/or robustness.
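The abstract does not specify how the spectral regularizer is implemented. As a minimal sketch of one plausible approach, the PyTorch function below penalizes the log-space deviation between the sorted eigenvalues of a hidden layer's activation covariance matrix and a target 1/n^α power law; the function name, the log-space loss, the leading-eigenvalue scaling, and the penalty weight in the usage comment are all assumptions for illustration, not the authors' method.

import torch

def spectral_power_law_penalty(activations: torch.Tensor,
                               alpha: float = 1.0) -> torch.Tensor:
    """Penalize deviation of the activation eigenspectrum from a 1/n**alpha law.

    activations: (batch, n_units) tensor of hidden-layer responses.
    alpha: target power-law exponent (the paper sweeps values roughly in 1-3).
    """
    # Covariance of units across the batch: (n_units, n_units).
    cov = torch.cov(activations.T)
    # Eigenvalues in descending order; clamp for numerical stability in log space.
    eigvals = torch.linalg.eigvalsh(cov).flip(0).clamp_min(1e-8)
    n = eigvals.numel()
    ranks = torch.arange(1, n + 1, dtype=eigvals.dtype, device=eigvals.device)
    # Target spectrum proportional to 1/rank**alpha, scaled to the leading eigenvalue.
    target = eigvals[0] * ranks.pow(-alpha)
    # Mean squared error in log space, so every rank contributes comparably.
    return torch.mean((eigvals.log() - target.log()) ** 2)

# Hypothetical usage: add the penalty to the task loss during training, e.g.
#   loss = criterion(logits, labels) + 1e-3 * spectral_power_law_penalty(hidden, alpha=2.0)

Working in log space is a design choice here: on a power-law spectrum the eigenvalues span several orders of magnitude, so a plain squared error would be dominated by the first few ranks.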
