Evaluation Evaluation a Monte Carlo study

04/03/2015
by David M. W. Powers, et al.

Over the last decade there has been increasing concern about the biases embodied in traditional evaluation methods for Natural Language Processing/Learning, particularly methods borrowed from Information Retrieval. Without knowledge of the Bias and Prevalence of the contingency being tested, or equivalently the expectation due to chance, the simple conditional probabilities Recall, Precision and Accuracy are not meaningful as evaluation measures, either individually or in combinations such as F-factor. The existence of bias in NLP measures leads to the 'improvement' of systems by increasing their bias, such as the practice of improving tagging and parsing scores by assigning the most common value (e.g. water is always a Noun) rather than attempting to discover the correct one. The measures Cohen Kappa and Powers Informedness are discussed as unbiased alternatives to Recall and related to the psychologically significant measure DeltaP. In this paper we analyze both biased and unbiased measures theoretically, characterizing the precise relationships among all these measures, and we evaluate the evaluation measures themselves empirically using a Monte Carlo simulation.
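For concreteness, the sketch below (not from the paper; the function names, parameters and default values are illustrative assumptions) shows how the measures discussed in the abstract, namely Recall, Precision, Accuracy, Informedness, Markedness and Cohen's Kappa, can be computed from a 2x2 contingency table, together with a small Monte Carlo check that a chance-level classifier with a strong prediction bias can still score well on the biased measures while Informedness and Kappa stay near zero.

```python
# Minimal illustrative sketch, assuming a binary (2x2) contingency table.
# Informedness = Recall + InverseRecall - 1 (DeltaP'), Markedness =
# Precision + InversePrecision - 1 (DeltaP); Kappa corrects Accuracy
# for the agreement expected by chance given Bias and Prevalence.
import random

def measures(tp, fp, fn, tn):
    n  = tp + fp + fn + tn
    rp, rn = tp + fn, fp + tn            # real positives / negatives
    pp, pn = tp + fp, fn + tn            # predicted positives / negatives
    recall        = tp / rp if rp else 0.0
    inv_recall    = tn / rn if rn else 0.0
    precision     = tp / pp if pp else 0.0
    inv_precision = tn / pn if pn else 0.0
    accuracy      = (tp + tn) / n
    informedness  = recall + inv_recall - 1.0        # DeltaP'
    markedness    = precision + inv_precision - 1.0  # DeltaP
    expected_acc  = (rp * pp + rn * pn) / (n * n)    # chance agreement
    kappa = ((accuracy - expected_acc) / (1.0 - expected_acc)
             if expected_acc < 1.0 else 0.0)
    return recall, precision, accuracy, informedness, markedness, kappa

def simulate(n=100_000, prevalence=0.9, bias=0.9, seed=0):
    """Chance-level classifier: predicts positive with probability `bias`,
    independently of the true label (drawn with probability `prevalence`)."""
    rng = random.Random(seed)
    tp = fp = fn = tn = 0
    for _ in range(n):
        real = rng.random() < prevalence
        pred = rng.random() < bias
        if real and pred:       tp += 1
        elif pred:              fp += 1
        elif real:              fn += 1
        else:                   tn += 1
    return measures(tp, fp, fn, tn)

if __name__ == "__main__":
    r, p, a, inf, mark, k = simulate()
    print(f"Recall={r:.3f} Precision={p:.3f} Accuracy={a:.3f}")
    print(f"Informedness={inf:.3f} Markedness={mark:.3f} Kappa={k:.3f}")
    # Expected: Recall and Precision near 0.9 and Accuracy near 0.82,
    # yet Informedness, Markedness and Kappa near 0, since the
    # classifier carries no information about the true labels.
```

Under these assumed settings the biased measures reward the classifier simply for matching its Bias to the Prevalence of the data, which is exactly the kind of artefactual 'improvement' the abstract warns about.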
