Ranking vs. Classifying: Measuring Knowledge Base Completion Quality

Marina Speranskaya, Martin Schmitt, Benjamin Roth

doi:10.24432/C57G65

TL;DR

We publish a new evaluation benchmark for knowledge graph completion methods where ranking is replaced with actual classification, and show one way to improve knowledge graph embedding models in this new setting.
Abstract

Knowledge base completion (KBC) methods aim to infer missing facts from the information present in a knowledge base (KB). Such a method thus needs to estimate the likelihood of candidate facts and, ultimately, to distinguish true facts from false ones so that the KB is not compromised with untrue information. In the prevailing evaluation paradigm, however, models never actually decide whether a new fact should be accepted; they are judged solely on the position of true facts in a likelihood ranking over candidates. We argue that considering binary predictions is essential to reflect actual KBC quality, and propose a novel evaluation paradigm designed to provide more transparent model selection criteria for a realistic scenario.

We construct the data set FB14k-QAQ with an alternative evaluation data structure: instead of single facts, we use KB queries, i.e., facts in which one entity is replaced with a variable, and construct the corresponding sets of entities that are correct answers. We randomly remove some of these correct answers from the data set, simulating the realistic scenario of real-world entities missing from a KB. This way, we can explicitly measure a model's ability to handle queries that have more correct answers in the real world than in the KB, including the special case of queries with no valid answer at all. The latter stands in especially sharp contrast to the ranking setting. We evaluate a number of state-of-the-art KB embedding models on our new benchmark. The differences in relative performance between ranking-based and classification-based evaluation that we observe in our experiments confirm our hypothesis that good performance on the ranking task does not necessarily translate to good performance on the actual completion task.
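The classification setting described above can be sketched in a few lines: for a query (h, r, ?), a model's scores over all entities are turned into a predicted answer set via a threshold, and that set is compared against the gold answer set, which may be empty. This is a minimal illustration, not the paper's evaluation code; all function names, the example entities, and the 0.5 threshold are made up for exposition.

```python
# Sketch: classification-based evaluation of a KBC model on a query
# (h, r, ?) with a gold answer set, including the case where the
# gold answer set is empty. Names and values are illustrative only.

def classify_answers(scores, threshold):
    """Return the set of entities whose score exceeds the threshold."""
    return {entity for entity, score in scores.items() if score > threshold}

def query_f1(predicted, gold):
    """F1 between a predicted and a gold answer set.
    Predicting the empty set for a query with no valid answer
    counts as a perfect prediction (F1 = 1.0)."""
    if not predicted and not gold:
        return 1.0
    true_positives = len(predicted & gold)
    if true_positives == 0:
        return 0.0
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    return 2 * precision * recall / (precision + recall)

# Toy scores for a query like (Berlin, capital_of, ?)
scores = {"Germany": 0.9, "France": 0.2, "Austria": 0.1}
predicted = classify_answers(scores, threshold=0.5)
print(query_f1(predicted, {"Germany"}))  # 1.0
```

Note how a ranking metric cannot express the empty-answer case at all: every ranking puts *some* entity first, whereas the classification view lets a model legitimately answer "none".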
Our results motivate future work on KB embedding models with better prediction separability. As a first step in that direction, we propose a simple variant of TransE that encourages thresholding and achieves a significant improvement in classification F1 score relative to the original TransE.
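To make the thresholding idea concrete: TransE scores a triple (h, r, t) by the distance ||h + r − t||, so turning it into a classifier means accepting triples whose distance falls below a threshold. The sketch below shows this decision rule plus one plausible hinge-style loss that pushes true triples below the threshold and corrupted ones above it; the paper's actual TransE variant and loss may differ, and the threshold, margin, and embeddings here are toy values.

```python
import numpy as np

# Sketch: TransE distance scoring, threshold-based classification, and
# one plausible loss term encouraging score separation around a fixed
# threshold. Illustrative only; not the paper's exact formulation.

def transe_distance(h, r, t):
    """TransE models a true triple (h, r, t) as h + r ≈ t,
    i.e. a small L2 distance ||h + r - t||."""
    return np.linalg.norm(h + r - t)

def accept(h, r, t, threshold):
    """Classify the triple as true iff its distance is below the threshold."""
    return transe_distance(h, r, t) < threshold

def threshold_hinge_loss(d_pos, d_neg, theta, margin=0.1):
    """Hinge loss pulling a true triple's distance d_pos below theta
    and a corrupted triple's distance d_neg above theta, each by a
    margin, so a single global threshold separates the two."""
    return max(0.0, d_pos - theta + margin) + max(0.0, theta - d_neg + margin)

# Toy embeddings where h + r equals t exactly (a "true" triple)
h = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
t = np.array([1.0, 1.0])
print(accept(h, r, t, threshold=0.5))  # True
```

The design point is that vanilla TransE's ranking loss only enforces *relative* order between positive and negative triples, so well-ranked scores can still be inseparable by any single cutoff; an explicit threshold term in the loss is one way to restore that separability.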

Citation

@inproceedings{
speranskaya2020ranking,
title={Ranking vs. Classifying: Measuring Knowledge Base Completion Quality},
author={Marina Speranskaya and Martin Schmitt and Benjamin Roth},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={https://openreview.net/forum?id=3pcecaCEK-},
doi={10.24432/C57G65}
}
