A No-Free-Lunch Theorem for MultiTask Learning

06/29/2020
by Steve Hanneke, et al.

Multitask learning and related areas such as multi-source domain adaptation address modern settings where datasets from N related distributions {P_t} are to be combined towards improving performance on any single such distribution D. A perplexing fact remains in the evolving theory on the subject: while we would hope for performance bounds that account for the contribution of multiple tasks, the vast majority of analyses yield bounds that improve at best with the number n of samples per task, but most often do not improve with N at all. It might therefore seem that the distributional settings or aggregation procedures considered in such analyses are somehow unfavorable; as we show, however, the picture is more nuanced, with interestingly hard regimes that might otherwise appear favorable. In particular, we consider a seemingly favorable classification scenario where all tasks P_t share a common optimal classifier h^* (formalized below), and which can be shown to admit a broad range of regimes with improved oracle rates in terms of N and n. Some of our main results are as follows:

∙ We show that, even though such regimes admit minimax rates accounting for both n and N, no adaptive algorithm exists; that is, without access to distributional information, no algorithm can guarantee rates that improve with large N for fixed n.

∙ With a bit of additional information, namely a ranking of the tasks {P_t} according to their distance to the target D, a simple rank-based procedure can achieve near-optimal aggregations of the tasks' datasets, despite a search space exponential in N (a schematic sketch of such a procedure follows below). Interestingly, the optimal aggregation might exclude certain tasks, even though they all share the same h^*.
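To fix ideas, the shared-classifier setting can be written compactly. The display below is one natural formalization of the abstract's setup rather than a statement taken from the paper: the hypothesis class \mathcal{H} and the use of 0-1 risk are assumptions made here for concreteness.

```latex
% Assumed formalization of the shared-optimal-classifier setting:
% each task P_t is a distribution over X x {0,1}, and one classifier
% h^* in the (assumed) hypothesis class \mathcal{H} is simultaneously
% optimal for the 0-1 risk of every task.
\[
  h^{*} \;\in\; \bigcap_{t=1}^{N}
  \operatorname*{arg\,min}_{h \in \mathcal{H}}
  \Pr_{(X,Y) \sim P_t}\!\bigl[\, h(X) \neq Y \,\bigr].
\]
```

The no-free-lunch result in the first bullet says that even under this strong shared-h^* assumption, an algorithm given only the samples cannot adapt: it cannot guarantee rates improving with N for fixed n.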
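The rank-based procedure itself is not spelled out in the abstract, so the sketch below is a hedged illustration of the general idea rather than the paper's algorithm: given tasks already ranked by distance to the target, it searches only the N prefixes of the ranking rather than all 2^N subsets, pooling each prefix's data for empirical risk minimization. The name rank_based_aggregation, the held-out target validation split, and the use of logistic regression as the base learner are all assumptions made here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def rank_based_aggregation(datasets, target_val_X, target_val_y):
    """Prefix search over tasks ranked by distance to the target D.

    datasets: list of (X, y) pairs, assumed sorted so that datasets[0]
    is the task closest to the target. Only the N ranking prefixes
    {1}, {1,2}, ..., {1,...,N} are tried, which is what collapses the
    exponential subset search to a linear one.
    """
    best_err, best_model, best_k = np.inf, None, 0
    for k in range(1, len(datasets) + 1):
        # Pool the k highest-ranked tasks' samples and fit an ERM-style
        # learner (logistic regression stands in for an arbitrary one).
        X = np.vstack([d[0] for d in datasets[:k]])
        y = np.concatenate([d[1] for d in datasets[:k]])
        model = LogisticRegression().fit(X, y)
        # Score the pooled model on held-out target data.
        err = np.mean(model.predict(target_val_X) != target_val_y)
        if err < best_err:
            best_err, best_model, best_k = err, model, k
    return best_model, best_k
```

Note that the selected prefix length best_k may be strictly smaller than N: the procedure can drop the most distant tasks entirely, which mirrors the abstract's observation that the optimal aggregation might exclude certain tasks even though all of them share the same h^*.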
