Answering Visual-Relational Queries in Web-Extracted Knowledge Graphs

Daniel Oñoro-Rubio, Mathias Niepert, Alberto García-Durán, Roberto González-Sánchez, Roberto J. López-Sastre

doi:10.24432/C56P45

TL;DR

A visual-relational knowledge graph (KG) is a multi-relational graph whose entities are associated with images. We explore novel machine learning approaches for answering visual-relational queries in web-extracted knowledge graphs. To this end, we have created ImageGraph, a KG with 1,330 relation types, 14,870 entities, and 829,931 images crawled from the web. With visual-relational KGs such as ImageGraph, one can introduce novel probabilistic query types in which images are treated as first-class citizens. Both the prediction of relations between unseen images and multi-relational image retrieval can be expressed with specific families of visual-relational queries. We introduce novel combinations of convolutional networks and knowledge graph embedding methods to answer such queries. We also explore a zero-shot learning scenario in which an image of an entirely new entity is linked via multiple relations to entities of an existing KG. The resulting multi-relational grounding of unseen entity images into a knowledge graph serves as a semantic entity representation. We conduct experiments to demonstrate that the proposed methods can answer these visual-relational queries efficiently and accurately.
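To make the idea of combining a convolutional network with a KG embedding scoring function concrete, here is a minimal sketch in PyTorch. It is not the authors' exact architecture: the ResNet-18 backbone, embedding dimension, and all names are illustrative assumptions; the abstract only states that CNNs are composed with KG embedding methods, and a DistMult-style bilinear score is one standard such composition. The sketch scores a triple (head image, relation, tail image), which covers queries where images replace entities.

# Minimal sketch (assumptions: ResNet-18 backbone, 256-dim embeddings,
# DistMult-style scoring). Not the paper's exact model.
import torch
import torch.nn as nn
import torchvision.models as models

class VisualDistMult(nn.Module):
    def __init__(self, num_relations: int, embed_dim: int = 256):
        super().__init__()
        # CNN backbone maps an image to a fixed-size embedding vector.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.encoder = backbone
        # One embedding vector per relation type (DistMult's diagonal relation matrix).
        self.rel = nn.Embedding(num_relations, embed_dim)

    def score(self, head_img, rel_id, tail_img):
        h = self.encoder(head_img)      # (batch, embed_dim)
        t = self.encoder(tail_img)      # (batch, embed_dim)
        r = self.rel(rel_id)            # (batch, embed_dim)
        # Higher score = more plausible visual-relational triple.
        return (h * r * t).sum(dim=-1)

model = VisualDistMult(num_relations=1330)   # 1,330 relation types as in ImageGraph
imgs = torch.randn(2, 3, 224, 224)           # two dummy 224x224 RGB images
rels = torch.tensor([0, 5])
scores = model.score(imgs, rels, imgs)

Ranking candidate tail images (or relations) by this score answers retrieval-style queries such as (image, relation, ?); the zero-shot scenario corresponds to scoring an image of an entity never seen during training against the existing KG.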

Citation

@inproceedings{o{\~n}oro-rubio2019answering,
title={Answering Visual-Relational Queries in Web-Extracted Knowledge Graphs},
author={Daniel O{\~n}oro-Rubio and Mathias Niepert and Alberto Garc{\'\i}a-Dur{\'a}n and Roberto Gonz{\'a}lez-S{\'a}nchez and Roberto J. L{\'o}pez-Sastre},
booktitle={Automated Knowledge Base Construction (AKBC)},
year={2019},
url={https://openreview.net/forum?id=BylEpe9ppX},
doi={10.24432/C56P45}
}