Towards Robust Knowledge Graph Embedding via Multi-task Reinforcement Learning
Knowledge graphs (KGs) play a pivotal role in AI-related applications. Despite their large scale, existing KGs are far from complete and comprehensive. To continuously enrich KGs, automatic knowledge construction and update mechanisms are commonly employed, which inevitably introduce a considerable amount of noise. However, most existing knowledge graph embedding (KGE) methods assume that all triple facts in a KG are correct and project entities and relations into a low-dimensional space without accounting for noise and knowledge conflicts, which leads to low-quality and unreliable representations. To this end, in this paper, we propose a general multi-task reinforcement learning framework that greatly alleviates the noisy-data problem. In our framework, reinforcement learning is exploited to select high-quality knowledge triples while filtering out noisy ones. In addition, to take full advantage of the correlations among semantically similar relations, the triple-selection processes of similar relations are trained collectively via multi-task learning. Moreover, we extend the popular KGE models TransE, DistMult, ConvE, and RotatE with the proposed framework. Experimental results show that our approach enhances existing KGE models and provides more robust representations of KGs in noisy scenarios.
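The abstract does not spell out the framework's implementation, so the following is only a minimal sketch of the general idea: a TransE-style scorer combined with a REINFORCE-style policy that decides whether to keep or filter each candidate triple. The selector architecture, the reward (negative TransE distance), and all hyperparameters here are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch (not the paper's code): a TransE scorer plus a
# REINFORCE-style triple selector that learns to keep plausible triples.
import torch
import torch.nn as nn

class TransE(nn.Module):
    def __init__(self, n_ent, n_rel, dim=50):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)

    def score(self, h, r, t):
        # Lower ||h + r - t||_1 means a more plausible triple.
        return torch.norm(self.ent(h) + self.rel(r) - self.ent(t), p=1, dim=-1)

class TripleSelector(nn.Module):
    """Policy network: probability of keeping each candidate triple."""
    def __init__(self, dim=50):
        super().__init__()
        self.fc = nn.Linear(3 * dim, 1)

    def forward(self, h_e, r_e, t_e):
        logits = self.fc(torch.cat([h_e, r_e, t_e], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)

# One illustrative training step. Assumed reward: negative TransE distance,
# so the policy is pushed toward keeping low-noise (plausible) triples.
kge = TransE(n_ent=1000, n_rel=20)
selector = TripleSelector()
opt = torch.optim.Adam(list(kge.parameters()) + list(selector.parameters()), lr=1e-3)

h = torch.randint(0, 1000, (32,))
r = torch.randint(0, 20, (32,))
t = torch.randint(0, 1000, (32,))

keep_prob = selector(kge.ent(h), kge.rel(r), kge.ent(t))
action = torch.bernoulli(keep_prob)            # 1 = keep, 0 = filter out
reward = -kge.score(h, r, t).detach()          # plausible triples -> higher reward
log_prob = torch.log(torch.where(action.bool(), keep_prob, 1 - keep_prob) + 1e-9)
policy_loss = -(reward * log_prob).mean()      # REINFORCE objective

kge_loss = (action.detach() * kge.score(h, r, t)).mean()  # embed only kept triples
(policy_loss + kge_loss).backward()
opt.step()
```

In the multi-task setting described in the abstract, selectors for semantically similar relations would additionally share parameters or be trained jointly; that sharing scheme is omitted here.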