A Diagnostic Approach to Assess the Quality of Data Splitting in Machine Learning
In machine learning, it is routine practice to split the data into a training set and a test set. A proposed model is built on the training data, and its performance is then assessed on the test data. Usually, the data is split randomly on an ad hoc basis. This approach often works well, but it frequently fails to gauge how well the model generalizes under perturbations of the training and test data. In practice, this sensitivity to randomness in the input data surfaces when a new iteration of a fixed pipeline, from model building to training and testing, is executed and an overly optimistic performance estimate is reported. Since the consistency of a model's performance depends heavily on how the data is split, any conclusions about the model's robustness are unreliable in such a scenario. We propose a diagnostic approach that quantitatively assesses the quality of a given split in terms of its true randomness and provides a basis for inferring the model's insensitivity to the input data. We link model robustness to random splitting through a data-driven distance metric based on the Mahalanobis squared distance between a training set and its corresponding test set. The probability distribution of this distance metric is obtained through Monte Carlo simulation, and a threshold is derived from a one-sided hypothesis test. We motivate and demonstrate the performance of the proposed approach on several real data sets, and we use it to compare existing data splitting methods.
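The sketch below illustrates the general idea described in the abstract, not the authors' actual implementation: score a given train/test split by a Mahalanobis squared distance between the two subsets, simulate the metric's distribution over many purely random splits (Monte Carlo), and flag splits whose distance exceeds a one-sided threshold. The specific distance form (between subset means, with a pooled covariance), the 95% quantile cutoff, and all function names are illustrative assumptions.

```python
import numpy as np

def mahalanobis_sq_distance(train_X, test_X):
    """Mahalanobis squared distance between train and test subset means,
    using the pooled covariance of both subsets (illustrative choice)."""
    pooled = np.vstack([train_X, test_X])
    cov = np.cov(pooled, rowvar=False)
    cov_inv = np.linalg.pinv(cov)          # pseudo-inverse guards against singular covariance
    diff = train_X.mean(axis=0) - test_X.mean(axis=0)
    return float(diff @ cov_inv @ diff)

def simulate_null_distribution(X, test_frac=0.2, n_sims=2000, rng=None):
    """Monte Carlo: distribution of the distance metric under repeated random splits."""
    rng = np.random.default_rng(rng)
    n_test = int(round(test_frac * len(X)))
    dists = np.empty(n_sims)
    for i in range(n_sims):
        idx = rng.permutation(len(X))
        test_idx, train_idx = idx[:n_test], idx[n_test:]
        dists[i] = mahalanobis_sq_distance(X[train_idx], X[test_idx])
    return dists

def split_is_acceptable(X, train_idx, test_idx, alpha=0.05, rng=0):
    """One-sided test: reject the split if its distance exceeds the
    (1 - alpha) quantile of the simulated null distribution."""
    null_dists = simulate_null_distribution(X, test_frac=len(test_idx) / len(X), rng=rng)
    threshold = np.quantile(null_dists, 1 - alpha)
    observed = mahalanobis_sq_distance(X[train_idx], X[test_idx])
    return observed <= threshold, observed, threshold

# Example usage on synthetic data with a non-random (tail-of-the-data) split.
if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 5))
    train_idx, test_idx = np.arange(0, 400), np.arange(400, 500)
    ok, d, thr = split_is_acceptable(X, train_idx, test_idx)
    print(f"distance={d:.3f}, threshold={thr:.3f}, acceptable={ok}")
```

Under these assumptions, a split whose observed distance falls beyond the one-sided threshold is treated as inconsistent with purely random splitting, which is the diagnostic signal the abstract describes.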