On Evaluating Embedding Models for Knowledge Base Completion
Knowledge bases contribute to many artificial intelligence tasks, yet they are often incomplete. To add missing facts to a given knowledge base, various embedding models have been proposed in the recent literature. Perhaps surprisingly, relatively simple models with limited expressiveness have often performed remarkably well under today's most commonly used evaluation protocols. In this paper, we explore whether recent embedding models work well for knowledge base completion tasks and argue that the current evaluation protocols are better suited to question answering than to knowledge base completion. We show that under an alternative evaluation protocol more suitable for knowledge base completion, the performance of all models is unsatisfactory. This indicates the need for more research into both embedding models and evaluation protocols for knowledge base completion.
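The "most commonly used evaluation protocols" referred to above are entity-ranking protocols: for each test triple, the model scores all candidate entities and the rank of the true entity is aggregated into metrics such as MRR and Hits@k, usually in the "filtered" setting of Bordes et al. (2013). The sketch below illustrates this standard protocol; the score table, `score_fn`, and the toy triples are hypothetical stand-ins, not the paper's own models or data.

```python
import numpy as np

def filtered_ranks(test_triples, score_fn, num_entities, known):
    """Rank the true object of each test triple (s, r, o) among all
    entities, filtering out other objects already known to be correct
    (the 'filtered' setting commonly used in the literature)."""
    ranks = []
    for s, r, o in test_triples:
        scores = score_fn(s, r, np.arange(num_entities))
        # Filter: exclude other known-true objects from the ranking,
        # keeping only the object currently being evaluated.
        for e in known.get((s, r), set()) - {o}:
            scores[e] = -np.inf
        # Rank = 1 + number of candidates scored strictly higher.
        ranks.append(1 + int((scores > scores[o]).sum()))
    return ranks

# Toy example with random scores standing in for a trained model.
rng = np.random.default_rng(0)
table = rng.normal(size=(3, 3, 5))           # scores[s, r, candidate]
score_fn = lambda s, r, cands: table[s, r, cands]
test = [(0, 1, 2), (2, 0, 4)]                # (subject, relation, object)
known = {(0, 1): {2, 3}, (2, 0): {4}}        # all known true objects
ranks = filtered_ranks(test, score_fn, num_entities=5, known=known)
mrr = float(np.mean([1.0 / r for r in ranks]))
hits10 = float(np.mean([r <= 10 for r in ranks]))
```

Note that this protocol only asks whether the true answer ranks highly among candidates for a given query, which is the sense in which the paper argues it resembles question answering rather than deciding which candidate facts should actually be added to the knowledge base.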