Algorithms and Improved bounds for online learning under finite hypothesis class

03/24/2019
by   Ankit Sharma, et al.

Online learning is the process of answering a sequence of questions using the correct answers to the previous questions. It is studied in many research areas, such as game theory, information theory and machine learning. The online learning framework has two main components: the learning algorithm, also known as the learner, and the hypothesis class, a set of functions that the learner uses to predict answers to the questions. When this class contains a function that answers the entire sequence of questions correctly, the setting is called the realizable case; when the hypothesis class contains no such function, it is called the unrealizable case. In both cases, the goal of the learner is to make not many more mistakes than the best function in the hypothesis class would make over the entire sequence of questions. The performance of a learner is analysed through theoretical bounds on the number of mistakes it makes. This paper proposes three algorithms that improve the mistake bound in the unrealizable case. The proposed algorithms perform substantially better than existing ones in the long run when most of the input sequences presented to the learner are likely to be realizable.
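As a reference point for the setting described above, the sketch below implements the classical Weighted Majority algorithm of Littlestone and Warmuth for a finite hypothesis class with binary labels. It is a standard baseline for the unrealizable case, not one of the three algorithms proposed in the paper, and the toy threshold class in the usage example is purely illustrative.

```python
# Weighted Majority over a finite hypothesis class (standard baseline,
# not the paper's proposed algorithms). Labels are in {0, 1}.

def weighted_majority(hypotheses, stream, beta=0.5):
    """Run Weighted Majority online and return the learner's mistake count.

    hypotheses: list of functions x -> {0, 1} (the finite class H)
    stream:     iterable of (x, y) pairs revealed one at a time
    beta:       multiplicative penalty in (0, 1) applied to wrong hypotheses
    """
    weights = [1.0] * len(hypotheses)
    mistakes = 0
    for x, y in stream:
        # Predict by a weighted vote of all hypotheses in the class.
        vote_one = sum(w for w, h in zip(weights, hypotheses) if h(x) == 1)
        vote_zero = sum(w for w, h in zip(weights, hypotheses) if h(x) == 0)
        prediction = 1 if vote_one >= vote_zero else 0
        if prediction != y:
            mistakes += 1
        # Multiplicatively penalize every hypothesis that erred on x.
        weights = [w * beta if h(x) != y else w
                   for w, h in zip(weights, hypotheses)]
    return mistakes


if __name__ == "__main__":
    # Hypothetical class: threshold classifiers on integers, h_t(x) = 1 iff x >= t.
    hypotheses = [lambda x, t=t: int(x >= t) for t in range(5)]
    # This sequence is realizable by the threshold t = 2.
    stream = [(x, int(x >= 2)) for x in [0, 3, 1, 4, 2, 0, 3]]
    print(weighted_majority(hypotheses, stream))
```

With beta = 1/2 this baseline makes at most roughly 2.41 (M* + log2 |H|) mistakes, where M* is the number of mistakes of the best hypothesis in H; bounds of this form for the unrealizable case are what the paper's proposed algorithms aim to improve, especially when the presented sequences are in fact close to realizable.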
