Computational extraction of metrics and normative data on the Alternative Uses Test on a set of 420 household objects

08/16/2021
by Faheem Zunjani, et al.

The Alternative Uses Test (AUT) is a classical test that has long been used in the investigation of creativity and divergent thinking. Performance on this test is usually rated through ad hoc manual assessment on a subset of established metrics, such as Fluency, Flexibility, and Originality. Ad hoc performance rating, however, brings multiple disadvantages beyond the high manual workload: (a) different object sets are used in different studies, which makes cross-comparability hard; (b) not all metrics used in one study may be used in another, so meta-analyses are hard to perform; and (c) the rating on certain metric types may be biased by the creativity of the group the test is given to: the Originality metric, for example, is rated as a percentage of how often an answer was produced by the group's participants. Measurement of AUT performance would gain in scientific rigor if it could rely on a set of normative data and a computational treatment of at least part of the core metrics. In this paper we report on gathering data on the uses human participants come up with for a large set of 420 household objects. A computational treatment of these answers is developed, and core metrics for this dataset are extracted. A new computational metric, Order Rank, is developed to provide further precision in understanding and analysing creative answers. The resulting dataset and metrics are made available to other researchers via an interface; this provides the largest set of objects with answers and metrics to date. In a subsequent study, a larger amount of data is gathered for a 32-item subset of objects. The computational treatment of this data provides a strong normative dataset for the Alternative Uses Test.
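
To make the group-relative nature of these metrics concrete, the sketch below shows one common way the three classical AUT metrics are operationalized: Fluency as the number of uses produced, Flexibility as the number of distinct semantic categories, and Originality as a frequency-based percentage across the group, as described in the abstract. The function name, data layout, and exact formulas here are assumptions made for illustration, not the paper's implementation (which also computes the new Order Rank metric).

```python
from collections import Counter

def score_aut(responses_by_participant, category_of):
    """Illustrative AUT scoring for one object across a group.

    responses_by_participant: dict mapping participant id -> list of answers
    category_of: dict mapping an answer -> its semantic category
    """
    # Pool all answers to estimate how common each one is in the group.
    all_answers = [a for answers in responses_by_participant.values()
                   for a in answers]
    counts = Counter(all_answers)
    n_participants = len(responses_by_participant)

    scores = {}
    for pid, answers in responses_by_participant.items():
        fluency = len(answers)                                # uses produced
        flexibility = len({category_of[a] for a in answers})  # distinct categories
        # Originality (one common convention): 100% minus the share of the
        # group that produced the answer, averaged over a participant's
        # answers, so rarer answers score higher.
        originality = sum(
            100.0 * (1 - counts[a] / n_participants) for a in answers
        ) / max(fluency, 1)
        scores[pid] = {"fluency": fluency,
                       "flexibility": flexibility,
                       "originality": originality}
    return scores

# Hypothetical usage with two participants and one object (e.g., a brick):
data = {"p1": ["paperweight", "doorstop"], "p2": ["doorstop"]}
cats = {"paperweight": "weight", "doorstop": "wedge"}
print(score_aut(data, cats))
```

As point (c) of the abstract notes, a score computed this way depends on who else is in the sample, which is exactly the bias a normative dataset helps remove.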
