Active Representation Learning for General Task Space with Applications in Robotics

06/15/2023
by Yifang Chen, et al.

Representation learning based on multi-task pretraining has become a powerful approach in many domains. In particular, task-aware representation learning aims to learn an optimal representation for a specific target task by sampling data from a set of source tasks, while task-agnostic representation learning seeks to learn a universal representation for a class of tasks. In this paper, we propose a general and versatile algorithmic and theoretic framework for active representation learning, where the learner optimally chooses which source tasks to sample from. This framework, together with a tractable meta algorithm, accommodates nearly arbitrary target and source task spaces (from discrete to continuous), covers both the task-aware and task-agnostic settings, and is compatible with deep representation learning practice. We provide several instantiations under this framework, from bilinear and feature-based nonlinear models to general nonlinear cases. In the bilinear case, by leveraging the non-uniform spectrum of the task representation and the calibrated source-target relevance, we prove that the sample complexity needed to achieve ε-excess risk on the target scales with (k^*)^2 ‖v^*‖_2^2 ε^-2, where k^* is the effective dimension of the target and ‖v^*‖_2^2 ∈ (0,1] captures the connection between the source and target spaces. Compared with passive learning, this can reduce the sample complexity to as little as a 1/d_W fraction, where d_W is the dimension of the task space. Finally, we demonstrate different instantiations of our meta algorithm on synthetic datasets and robotics problems, from pendulum simulations to real-world drone flight datasets. On average, our algorithms outperform baselines by 20%-70%.
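To make the setting concrete, below is a minimal, illustrative sketch (Python/NumPy) of active source-task selection in a bilinear model: the learner repeatedly re-allocates its sampling budget across source tasks according to an estimated relevance to the target. All names and design choices here (B_true, W_src, v_star, fit_representation, the relevance heuristic) are assumptions made for illustration; this is not the paper's meta algorithm or its theoretical allocation rule, only a sketch of the sample-then-reallocate loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bilinear setup: d-dim inputs, k-dim shared representation B,
# one linear head per task. Everything below is illustrative, not the paper's model.
d, k, n_sources = 20, 4, 10
B_true = np.linalg.qr(rng.normal(size=(d, k)))[0]      # shared representation (orthonormal cols)
W_src = rng.normal(size=(n_sources, k))                # source task heads
v_star = rng.dirichlet(np.ones(n_sources))             # target expressed as a mix of sources
w_tgt = v_star @ W_src                                 # target head

def sample_task(w, n):
    """Draw n (x, y) pairs from the bilinear model y = w^T B^T x + noise."""
    X = rng.normal(size=(n, d))
    y = X @ B_true @ w + 0.1 * rng.normal(size=n)
    return X, y

def fit_representation(batches):
    """Crude shared-representation estimate: top-k subspace of the stacked
    per-task least-squares solutions (a method-of-moments-style heuristic)."""
    thetas = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in batches]
    U, _, _ = np.linalg.svd(np.stack(thetas, axis=1), full_matrices=False)
    return U[:, :k]

# Round-based active allocation: start uniform, then re-weight each source by a
# proxy for its relevance to the target (alignment of estimated heads).
budget_per_round, n_rounds = 200, 3
alloc = np.full(n_sources, 1.0 / n_sources)
for rnd in range(n_rounds):
    counts = np.maximum((alloc * budget_per_round).astype(int), 5)
    batches = [sample_task(W_src[j], counts[j]) for j in range(n_sources)]
    B_hat = fit_representation(batches)

    # Probe the target with a small batch and regress its head in the learned space.
    Xt, yt = sample_task(w_tgt, 50)
    w_hat = np.linalg.lstsq(Xt @ B_hat, yt, rcond=None)[0]

    # Relevance proxy: alignment of each source head (in the learned space) with the target head.
    W_hat = np.stack([np.linalg.lstsq(X @ B_hat, y, rcond=None)[0] for X, y in batches])
    rel = np.abs(W_hat @ w_hat)
    alloc = rel / rel.sum()
    print(f"round {rnd}: allocation = {np.round(alloc, 2)}")
```

In the framework described above, the allocation would instead be driven by the calibrated source-target relevance and the non-uniform spectrum of the task representation; the sketch only conveys the overall loop of sampling, re-estimating the representation, and re-allocating the budget toward the most target-relevant sources.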

