AdaTask: Adaptive Multitask Online Learning
We introduce and analyze AdaTask, a multitask online learning algorithm that adapts to the unknown structure of the tasks. When the N tasks are stochastically activated, we show that the regret of AdaTask is better, by a factor that can be as large as √(N), than the regret achieved by running N independent algorithms, one for each task. AdaTask can be seen as a comparator-adaptive version of Follow-the-Regularized-Leader with a Mahalanobis norm potential. Through a variational formulation of this potential, our analysis reveals how AdaTask jointly learns the tasks and their structure. Experiments supporting our findings are presented.
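The abstract describes AdaTask as a comparator-adaptive version of Follow-the-Regularized-Leader with a Mahalanobis-norm potential. The sketch below shows only the generic FTRL building block this refers to, for linear losses with a *fixed* illustrative matrix `A`; it is not the AdaTask algorithm, whose key feature is adapting that matrix to the unknown task structure.

```python
import numpy as np

def ftrl_mahalanobis(grads, A):
    """Unconstrained FTRL iterates for linear losses <g_t, w>.

    With regularizer (1/2) w^T A w (a squared Mahalanobis norm),
    the closed-form update is w_{t+1} = -A^{-1} @ sum_{s<=t} g_s.
    The matrix A is fixed here for illustration; AdaTask instead
    learns it jointly with the tasks.
    """
    A_inv = np.linalg.inv(A)
    G = np.zeros(A.shape[0])          # running sum of gradients
    iterates = [np.zeros(A.shape[0])]  # w_1 minimizes the regularizer
    for g in grads:
        G += g
        iterates.append(-A_inv @ G)
    return iterates
```

With `A = 2 * np.eye(2)` and gradients `[1, 0]` then `[0, 1]`, the iterates are `[-0.5, 0]` and `[-0.5, -0.5]`, illustrating how a larger regularization matrix shrinks the steps along the corresponding directions.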