Why So Many Published Sensitivity Analyses Are False: A Systematic Review of Sensitivity Analysis Practices

11/30/2017
by Andrea Saltelli et al.

Sensitivity analysis (SA) has much to offer for a very large class of applications, such as model selection, calibration, optimization, quality assurance, and many others. Sensitivity analysis provides crucial contextual information about a prediction by answering the question "Which uncertain input factors are responsible for the uncertainty in the prediction?" SA is distinct from uncertainty analysis (UA), which instead addresses the question "How uncertain is the prediction?" As we discuss in the present paper, much confusion exists in the use of these terms. A proper uncertainty analysis of the output of a mathematical model needs to map what the model does when the input factors are left free to vary over their range of existence. A fortiori, this is true of a sensitivity analysis. Despite this, most published UA and SA still explore the input space by moving along mono-dimensional corridors, which leave the space of variation of the input factors mostly unscathed. We use results from a bibliometric analysis to show that many published SA studies fail the elementary requirement of properly exploring the space of the input factors. The results, while discipline-dependent, point to a worrying lack of standards and of recognized good practices. The misuse of sensitivity analysis in mathematical modelling is at least as serious as the misuse of the p-test in statistical modelling. Mature methods have existed for about two decades to produce a defensible sensitivity analysis. We end by offering a rough guide for the proper use of these methods.
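
To make the abstract's distinction concrete, below is a minimal, hypothetical sketch (not taken from the paper) of a global, variance-based sensitivity analysis — one instance of the mature methods the abstract alludes to. It estimates first-order and total-order Sobol' indices for the well-known Ishigami test function using the classic two-matrix sampling scheme; the function, sample size, and seed here are our own illustrative choices. Unlike a one-at-a-time design that walks along mono-dimensional corridors, this scheme varies every input simultaneously over its full range.

```python
# Hypothetical illustration: variance-based (Sobol') sensitivity analysis
# of the Ishigami test function via the classic two-matrix sampling scheme.
import numpy as np

rng = np.random.default_rng(0)

def ishigami(x, a=7.0, b=0.1):
    """Ishigami test function; inputs assumed uniform on [-pi, pi]."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

k, n = 3, 100_000                        # number of inputs, base sample size
A = rng.uniform(-np.pi, np.pi, (n, k))   # two independent samples that together
B = rng.uniform(-np.pi, np.pi, (n, k))   # cover the full k-dimensional input space

f_A, f_B = ishigami(A), ishigami(B)
var = np.var(np.concatenate([f_A, f_B]))  # total output variance

for i in range(k):
    AB = A.copy()
    AB[:, i] = B[:, i]                    # resample only input i
    f_AB = ishigami(AB)
    S_i = np.mean(f_B * (f_AB - f_A)) / var        # first-order index
    ST_i = 0.5 * np.mean((f_A - f_AB) ** 2) / var  # total-order index
    print(f"x{i + 1}: S = {S_i:.3f}, ST = {ST_i:.3f}")
```

With the inputs uniform on [-π, π], the estimates should land near the analytical first-order values S ≈ (0.31, 0.44, 0.00); the total-order index of x3 is nonzero even though its first-order index vanishes, because x3 acts only through its interaction with x1 — exactly the kind of effect a mono-dimensional exploration cannot reveal.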
