Cross-modal subspace learning with Kernel correlation maximization and Discriminative structure preserving

03/26/2019
by Jun Yu, et al.

Measuring similarity between heterogeneous data remains an open problem. Many works have been proposed to learn a common subspace in which the similarity between different modalities can be computed. However, most existing methods focus on learning a low-dimensional subspace and ignore the loss of discriminative information during dimensionality reduction, and therefore fail to achieve the expected results. Building on the result from Hilbert space theory that Hilbert spaces of the same dimension are isomorphic, we propose a novel framework in which the repeated use of label information yields a more discriminative subspace representation by learning an isomorphic Hilbert space for each modality. Our model not only captures inter-modality correlation by maximizing kernel correlation, but also preserves the structure information within each modality through a constructed graph model. Extensive experiments on three public datasets evaluate the proposed framework, termed Cross-modal subspace learning with Kernel correlation maximization and Discriminative structure preserving (CKD). Experimental results demonstrate the competitive performance of CKD compared with classic subspace learning methods.
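The abstract names two ingredients: a cross-modal kernel correlation term and an intra-modal graph term for structure preservation. The Python sketch below is only an illustrative approximation of these ideas, not the paper's actual objective: it assumes a centered kernel-alignment (HSIC-style) score as a stand-in for "kernel correlation", RBF kernels, and a k-NN graph Laplacian as the structure-preserving component; all variable names (X_img, X_txt, etc.) are hypothetical.

```python
# Illustrative sketch only; the CKD paper's exact formulation is not reproduced here.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """RBF kernel matrix for row-wise samples in X (assumed kernel choice)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def center(K):
    """Center a kernel matrix: H K H with H = I - (1/n) 11^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernel_correlation(Ka, Kb):
    """Normalized Hilbert-Schmidt inner product of two centered kernels
    (centered kernel alignment), used here as a proxy for cross-modal
    kernel correlation."""
    Ka_c, Kb_c = center(Ka), center(Kb)
    return np.sum(Ka_c * Kb_c) / (np.linalg.norm(Ka_c) * np.linalg.norm(Kb_c))

def graph_laplacian(X, k=5, gamma=1.0):
    """k-NN similarity graph and its unnormalized Laplacian, a common
    structure-preserving regularizer (assumed graph construction)."""
    S = rbf_kernel(X, gamma)
    idx = np.argsort(-S, axis=1)[:, 1:k + 1]  # k nearest neighbors, excluding self
    A = np.zeros_like(S)
    rows = np.arange(S.shape[0])[:, None]
    A[rows, idx] = S[rows, idx]
    A = np.maximum(A, A.T)            # symmetrize the adjacency
    return np.diag(A.sum(1)) - A      # L = D - A

# Toy paired data for two modalities (e.g., image and text features).
rng = np.random.default_rng(0)
X_img = rng.normal(size=(100, 64))
X_txt = X_img @ rng.normal(size=(64, 32)) + 0.1 * rng.normal(size=(100, 32))

K_img, K_txt = rbf_kernel(X_img, 0.01), rbf_kernel(X_txt, 0.01)
print("cross-modal kernel correlation:", kernel_correlation(K_img, K_txt))
print("trace of image-graph Laplacian:", np.trace(graph_laplacian(X_img)))
```

In a full method along these lines, both terms would be combined into one objective over learned projections for each modality (maximize the cross-modal correlation while keeping the graph-Laplacian smoothness term small); here they are only computed on raw toy features to show what each quantity measures.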
