VisQA: Quantifying Information Visualisation Recallability via Question Answering

12/30/2021
by Yao Wang et al.

Despite its importance for assessing how effectively information is communicated visually, the fine-grained recallability of information visualisations has not been studied quantitatively so far. In this work we propose a visual question answering (VQA) paradigm to study visualisation recallability and present VisQA – a novel VQA dataset of 200 visualisations annotated with recallability scores crowd-sourced from 305 participants via 1,000 questions spanning five question types. Furthermore, we present the first computational method to predict the recallability of different visualisation elements, such as the title or specific data values. We report detailed analyses of our method on VisQA and demonstrate that it outperforms several baselines in overall recallability as well as in FE-, F-, RV-, and U-question recallability. We further demonstrate one possible application of our method: recommending the visualisation type that maximises user recallability for a given data source. Taken together, our work makes fundamental contributions towards a new generation of methods that assist designers in optimising visualisations.
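To make the notion of a crowd-sourced recallability score concrete, here is a minimal sketch of one plausible way such scores could be aggregated: per-element recallability as the fraction of correct crowd answers for each question. All names, the data shape, and the aggregation itself are illustrative assumptions, not details taken from the VisQA paper.

```python
# Hypothetical sketch: aggregate crowd answers into per-question
# recallability scores. The (question_id, is_correct) input format
# is an assumption for illustration, not the paper's actual schema.
from collections import defaultdict

def recallability_scores(answers):
    """answers: iterable of (question_id, is_correct) pairs from crowd workers.
    Returns question_id -> fraction of correct answers."""
    totals = defaultdict(lambda: [0, 0])  # question_id -> [n_correct, n_total]
    for qid, correct in answers:
        totals[qid][0] += int(correct)
        totals[qid][1] += 1
    return {qid: n_correct / n_total for qid, (n_correct, n_total) in totals.items()}

# Toy usage with made-up question ids and responses:
scores = recallability_scores([
    ("title", True), ("title", True), ("title", False),
    ("value_max", False), ("value_max", True),
])
```

In this toy example, the "title" question would receive a recallability score of 2/3 and "value_max" a score of 0.5; a predictive model such as the one the abstract describes would then be trained to estimate such scores directly from the visualisation.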
