Do disruption index indicators measure what they propose to measure? The comparison of several indicator variants with assessments by peers

11/20/2019
by Lutz Bornmann, et al.

Recently, Wu, Wang, and Evans (2019) and Bu, Waltman, and Huang (2019) proposed a new family of indicators that measure whether a scientific publication is disruptive to a field or tradition of research. Such disruptive influences are characterized by papers that cite a focal paper but not its cited references. In this study, we are interested in the question of convergent validity, i.e., whether these disruption indicators are able to measure what they propose to measure ('disruptiveness'). We used external criteria of newness to examine convergent validity: in the post-publication peer review system of F1000Prime, experts assess whether the research reported in a paper fulfills certain criteria (e.g., reports new findings). This study is based on 120,179 papers from F1000Prime published between 2000 and 2016. In the first part of the study, we discuss the indicators. Based on the insights from this discussion, we propose alternative variants of the disruption indicators. In the second part, we investigate the convergent validity of the original indicators and the (possibly) improved variants. Although the results of a factor analysis show that the different variants measure similar dimensions, the results of regression analyses reveal that one variant (DI5) performs slightly better than the others.
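As background for the citation-based logic described above, the following is a minimal sketch of the original disruption index (often labeled DI1) as defined in Wu, Wang, and Evans (2019): papers citing only the focal work count as disruptive, papers citing both the focal work and its references count as consolidating, and papers citing only the references enter the denominator. The function and variable names here are illustrative assumptions, not identifiers from the paper.

```python
# Sketch of the disruption index DI1 (Wu, Wang & Evans 2019).
# Names are illustrative; the paper's own variants (e.g., DI5) differ
# in how these citation counts are weighted or thresholded.

def disruption_index(citing_focal: set[str],
                     citing_references: set[str]) -> float:
    """Compute DI1 for a focal paper.

    citing_focal:      IDs of papers that cite the focal paper.
    citing_references: IDs of papers that cite at least one of the
                       focal paper's cited references.
    """
    n_f = len(citing_focal - citing_references)  # cite focal only (disruptive)
    n_b = len(citing_focal & citing_references)  # cite focal and its references
    n_r = len(citing_references - citing_focal)  # cite references only
    total = n_f + n_b + n_r
    return (n_f - n_b) / total if total else 0.0


# Example: three papers cite only the focal paper, one cites both,
# and two cite only the focal paper's references.
focal = {"p1", "p2", "p3", "p4"}
refs = {"p4", "p5", "p6"}
print(disruption_index(focal, refs))  # (3 - 1) / (3 + 1 + 2) = 0.333...
```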
