Measuring uncertainty when pooling interval-censored data sets with different precision

10/25/2022
by Krasymyr Tretiak, et al.

Data quality is an important consideration in many engineering applications and projects. Data collection procedures do not always involve careful use of the most precise instruments or the strictest protocols, so data are often affected by imprecision, and sometimes by sharply varying levels of quality. Several mathematical representations of imprecision have been suggested, including a classical approach to censored data, which is optimal when the assumed error model is correct, and a weaker approach called interval statistics, based on partial identification, which makes fewer assumptions. Because the quality of statistical results is often crucial to the success of an engineering project, a natural question arises: should data of differing qualities be pooled together, or should only precise measurements be retained and imprecise data disregarded? Some worry that combining precise and imprecise measurements degrades the overall quality of the pooled data. Others fear that excluding data of lesser precision increases the overall uncertainty about results, because a smaller sample size implies more sampling uncertainty. This paper explores these concerns and describes simulation results that show when it is advisable to combine fairly precise data with rather imprecise data, comparing analyses under different mathematical representations of imprecision. Pooling the data sets is preferable when the low-quality data set does not exceed a certain level of uncertainty. However, so long as the data are random, it may be legitimate to reject the low-quality data when its reduction of sampling uncertainty does not counterbalance the effect of its imprecision on the overall uncertainty.
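To make the trade-off concrete, the following minimal sketch (not the paper's actual simulation; the interval widths, sample sizes, and distribution are illustrative assumptions) uses the interval-statistics idea that the sample mean of interval-censored data is itself an interval, [mean of lower endpoints, mean of upper endpoints]. It compares the width of that interval when using only precise measurements versus pooling them with much less precise ones:

```python
import random

random.seed(0)

def interval_mean(intervals):
    """Interval extension of the sample mean for interval-censored data:
    returns (mean of lower endpoints, mean of upper endpoints)."""
    lo = sum(a for a, _ in intervals) / len(intervals)
    hi = sum(b for _, b in intervals) / len(intervals)
    return lo, hi

# Hypothetical data: true values drawn from N(10, 1); the precise
# instrument reports +/- 0.1 intervals, the imprecise one +/- 2.0.
precise = [(x - 0.1, x + 0.1) for x in (random.gauss(10, 1) for _ in range(30))]
imprecise = [(x - 2.0, x + 2.0) for x in (random.gauss(10, 1) for _ in range(30))]

lo_p, hi_p = interval_mean(precise)
lo_all, hi_all = interval_mean(precise + imprecise)

print(f"precise only: [{lo_p:.2f}, {hi_p:.2f}]  width {hi_p - lo_p:.2f}")
print(f"pooled      : [{lo_all:.2f}, {hi_all:.2f}]  width {hi_all - lo_all:.2f}")
```

Here the imprecision of the pooled interval grows (its width is the average of the component widths), while the larger sample size shrinks the sampling uncertainty around it; whether pooling helps depends on which effect dominates, which is the question the paper's simulations address.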
