Reinforced Meta Active Learning
In stream-based active learning, the learner typically observes a stream of unlabeled data instances and must decide, for each instance, whether to query its label and use it for training or to discard it. Numerous active learning strategies attempt to minimize the number of labeled samples required for training in this setting by identifying and retaining the most informative instances. Most of these schemes are rule-based and rely on the notion of uncertainty, which captures how close a data sample lies to the classifier's decision boundary. Recently, there have been attempts to learn optimal selection strategies directly from the data, but many of them still lack generality for several reasons: 1) they focus on specific classification setups, 2) they rely on rule-based metrics, and 3) they require offline pre-training of the active learner on related tasks. In this work we address the above limitations and present an online stream-based meta active learning method that learns an informativeness measure on the fly, directly from the data, and is applicable to a general class of classification problems without any pre-training of the active learner on related tasks. The method is based on reinforcement learning and combines episodic policy search with a contextual bandits approach, which together train the active learner in conjunction with the model itself. We demonstrate on several real datasets that this method learns to select training samples more efficiently than existing state-of-the-art methods.
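To make the uncertainty-based baseline concrete, below is a minimal sketch of classic stream-based uncertainty sampling, the rule-based approach the abstract contrasts against (not the paper's reinforcement learning method). It assumes a linear classifier whose decision function gives a signed distance to the decision boundary; the threshold, synthetic stream, and oracle stand-in are all illustrative choices, not values from the paper.

```python
# Sketch of rule-based stream-based uncertainty sampling (baseline,
# NOT the paper's method). An incoming instance is labeled only when
# it falls close to the current decision boundary.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])
threshold = 0.5  # illustrative margin threshold (assumption)

# Warm-start on a small seed set so decision_function is defined.
X_seed = rng.normal(size=(10, 2))
y_seed = (X_seed[:, 0] > 0).astype(int)
model.partial_fit(X_seed, y_seed, classes=classes)

labels_used = 0
for _ in range(1000):                # simulated unlabeled stream
    x = rng.normal(size=(1, 2))
    # |decision_function| serves as a distance-to-boundary proxy
    margin = abs(model.decision_function(x)[0])
    if margin < threshold:           # uncertain -> query its label
        y = int(x[0, 0] > 0)         # oracle stand-in for a human labeler
        model.partial_fit(x, [y])    # train on the retained sample
        labels_used += 1

print(f"labeled {labels_used} of 1000 streamed instances")
```

The fixed threshold here is exactly the kind of hand-crafted, rule-based criterion the abstract argues against; the proposed method instead learns the select-or-skip decision itself, via episodic policy search and contextual bandits, while the classifier is being trained.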