Multi-Stage Multi-Task Feature Learning

10/22/2012
by Pinghua Gong, et al.

Multi-task sparse feature learning aims to improve generalization performance by exploiting the features shared among tasks. It has been successfully applied in many areas, including computer vision and biomedical informatics. Most existing multi-task sparse feature learning algorithms are formulated as convex sparse regularization problems, which are usually suboptimal because the convex relaxation only loosely approximates an ℓ_0-type regularizer. In this paper, we propose a non-convex formulation for multi-task sparse feature learning based on a novel non-convex regularizer. To solve the resulting non-convex optimization problem, we propose a Multi-Stage Multi-Task Feature Learning (MSMTFL) algorithm, together with intuitive interpretations and a detailed convergence and reproducibility analysis. Moreover, we present a theoretical analysis showing that MSMTFL achieves a better parameter estimation error bound than the convex formulation. Empirical studies on both synthetic and real-world data sets demonstrate the effectiveness of MSMTFL in comparison with state-of-the-art multi-task sparse feature learning algorithms.
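The abstract does not reproduce the formulation, but the multi-stage idea can be illustrated with a small sketch. Assuming a capped-ℓ_1 style regularizer of the form λ Σ_j min(||w^j||_1, θ) over the rows of the task weight matrix (a common choice for this kind of non-convex multi-task penalty), each stage solves a row-reweighted ℓ_1 problem and then drops the penalty on rows whose ℓ_1 norm the previous stage already found to exceed the threshold. The squared loss, the inner proximal-gradient solver, and all parameter names below are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def msmtfl(X_list, y_list, lam=0.1, theta=0.5, n_stages=5, n_iters=500, step=None):
    """Illustrative sketch of a multi-stage, multi-task feature learning scheme.

    Assumes a squared loss per task and a capped-L1-of-L1 style regularizer
    lam * sum_j min(||W_j||_1, theta) over the rows of the weight matrix W
    (d features x m tasks). Each stage solves a row-reweighted L1 problem via
    proximal gradient; rows whose L1 norm already exceeds theta are no longer
    penalized in the next stage. This is a hypothetical sketch, not the
    authors' algorithm verbatim.
    """
    m = len(X_list)                  # number of tasks
    d = X_list[0].shape[1]           # shared feature dimension
    W = np.zeros((d, m))
    row_weights = np.full(d, lam)    # stage-0 weights: penalize every row equally

    if step is None:                 # crude step size from the largest Lipschitz constant
        step = 1.0 / max(np.linalg.norm(X, 2) ** 2 / X.shape[0] for X in X_list)

    for _ in range(n_stages):
        # ---- inner problem: weighted L1, solved by proximal gradient (ISTA) ----
        for _ in range(n_iters):
            grad = np.zeros_like(W)
            for t, (X, y) in enumerate(zip(X_list, y_list)):
                grad[:, t] = X.T @ (X @ W[:, t] - y) / X.shape[0]
            Z = W - step * grad
            # entry-wise soft-thresholding with a per-row threshold
            thresh = step * row_weights[:, None]
            W = np.sign(Z) * np.maximum(np.abs(Z) - thresh, 0.0)
        # ---- reweighting: stop penalizing rows that are already "large" ----
        row_weights = lam * (np.abs(W).sum(axis=1) < theta)

    return W
```

The reweighting step is what tightens the convex ℓ_1 relaxation toward an ℓ_0-type penalty: rows confidently selected across tasks in one stage are not shrunk in the next, which is consistent with the paper's claim of a better parameter estimation error bound than the one-shot convex formulation.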


Related research

06/16/2014 - Multi-stage Multi-task feature learning via adaptive threshold
Multi-task feature learning aims to identify the shared features among t...

02/14/2017 - Efficient Multi-task Feature and Relationship Learning
In this paper we propose a multi-convex framework for multi-task learnin...

09/23/2020 - Rank-Based Multi-task Learning for Fair Regression
In this work, we develop a novel fairness learning approach for multi-ta...

04/13/2015 - Learning Multiple Visual Tasks while Discovering their Structure
Multi-task learning is a natural approach for computer vision applicatio...

06/18/2019 - Learning Personalized Attribute Preference via Multi-task AUC Optimization
Traditionally, most of the existing attribute learning methods are train...

05/09/2012 - Multi-Task Feature Learning Via Efficient l2,1-Norm Minimization
The problem of joint feature selection across a group of related tasks h...

05/20/2018 - Wasserstein regularization for sparse multi-task regression
Two important elements have driven recent innovation in the field of reg...
