How Deep is Your Art: An Experimental Study on the Limits of Artistic Understanding in a Single-Task, Single-Modality Neural Network

03/30/2022
by   Mahan Agha Zahedi, et al.

Mathematical modeling and aesthetic rule extraction of works of art are complex activities because art is a multidimensional, subjective discipline: the perception and interpretation of art are, to a large extent, relative and open-ended rather than measurable. Following the explainable Artificial Intelligence paradigm, this paper investigated, in a human-understandable fashion, the limits to which a single-task, single-modality benchmark computer vision model can classify contemporary 2D visual art. It is important to point out that this work does not introduce an interpretability method to open the black box of Deep Neural Networks; instead, it uses existing evaluation metrics derived from the confusion matrix to try to uncover the mechanism by which Deep Neural Networks understand art. To this end, VGG-11, pre-trained on ImageNet and discriminatively fine-tuned, was applied to handcrafted small-data datasets built from real-world photography gallery shows. We demonstrated that an artwork's Exhibited Properties, or formal factors such as shape and color, rather than its Non-Exhibited Properties, or content factors such as history and intention, have much higher potential to be the determinant when art pieces have very similar Exhibited Properties. We also showed that a single-task, single-modality model's understanding of art is inadequate, as it largely ignores Non-Exhibited Properties.
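The evaluation metrics mentioned above are the standard ones derivable from a multi-class confusion matrix. As a minimal sketch of that machinery (the 3x3 matrix below is hypothetical illustration data, not results from the paper), per-class precision, recall, and F1 can be computed directly from the matrix:

```python
import numpy as np

# Hypothetical confusion matrix for three artwork categories
# (rows: true class, columns: predicted class).
cm = np.array([
    [8, 1, 1],
    [2, 7, 1],
    [0, 2, 8],
])

def per_class_metrics(cm):
    """Derive precision, recall, and F1 for each class from a confusion matrix."""
    tp = np.diag(cm).astype(float)          # correct predictions per class
    fp = cm.sum(axis=0) - tp                # predicted as class k, but another class
    fn = cm.sum(axis=1) - tp                # true class k, predicted as another class
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

precision, recall, f1 = per_class_metrics(cm)
accuracy = np.diag(cm).sum() / cm.sum()     # overall fraction of correct predictions
```

Comparing these per-class scores across datasets with similar versus dissimilar Exhibited Properties is one human-understandable way to probe what the classifier actually attends to, in the spirit of the analysis described above.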

