Abstractified Multi-instance Learning (AMIL) for Biomedical Relation Extraction

William P Hogan, Molly Huang, Yannis Katsis, Tyler Baldwin, Ho-Cheol Kim, Yoshiki Baeza, Andrew Bartko, Chun-Nan Hsu

doi:10.24432/C5V30P

TL;DR

We propose a new method that improves biomedical relation extraction by leveraging ontological information.
Abstract

Relation extraction in the biomedical domain is a challenging task due to a lack of labeled data and a long-tail distribution of fact triples. Many works leverage distant supervision, which automatically generates labeled data by pairing a knowledge graph with raw textual data. Distant supervision produces noisy labels and requires additional techniques, such as multi-instance learning (MIL), to denoise the training signal. However, MIL requires multiple instances of data and struggles with very long-tail datasets such as those found in the biomedical domain. In this work, we propose a novel reformulation of MIL for biomedical relation extraction that abstractifies biomedical entities into their corresponding semantic types. By grouping entities by types, we are better able to take advantage of the benefits of MIL and further denoise the training signal. We show that this reformulation, which we refer to as abstractified multi-instance learning (AMIL), improves performance in biomedical relation extraction. We also propose a novel relationship embedding architecture that further improves model performance.
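
The bag-construction idea described above, grouping instances by the semantic types of their entities rather than by the specific entity pair, can be sketched as follows. This is a minimal illustration of how abstractification enlarges MIL bags for long-tail entities; the Instance fields, semantic-type labels, and example sentences are illustrative assumptions, not the authors' released code.

# Minimal sketch of AMIL-style bag construction (our reading of the abstract).
# Instances are grouped by the semantic types of the head and tail entities
# instead of by the specific entity pair, so rare entities still land in
# well-populated MIL bags.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Instance:
    sentence: str    # sentence mentioning both entities
    head: str        # head entity mention
    tail: str        # tail entity mention
    head_type: str   # semantic type of the head entity (e.g. "Gene")
    tail_type: str   # semantic type of the tail entity (e.g. "Disease")
    relation: str    # distantly supervised relation label

def standard_mil_bags(instances):
    """Conventional MIL: one bag per (head entity, tail entity, relation)."""
    bags = defaultdict(list)
    for ins in instances:
        bags[(ins.head, ins.tail, ins.relation)].append(ins)
    return bags

def abstractified_mil_bags(instances):
    """Abstractified MIL: one bag per (head type, tail type, relation)."""
    bags = defaultdict(list)
    for ins in instances:
        bags[(ins.head_type, ins.tail_type, ins.relation)].append(ins)
    return bags

if __name__ == "__main__":
    data = [
        Instance("BRCA1 mutations are linked to breast cancer.",
                 "BRCA1", "breast cancer", "Gene", "Disease", "associated_with"),
        Instance("TP53 loss is observed in many lung tumors.",
                 "TP53", "lung cancer", "Gene", "Disease", "associated_with"),
    ]
    # Two singleton bags under standard MIL ...
    print(len(standard_mil_bags(data)))       # -> 2
    # ... but one larger bag once entities are abstractified to their types.
    print(len(abstractified_mil_bags(data)))  # -> 1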

Citation

@inproceedings{
hogan2021abstractified,
title={Abstractified Multi-instance Learning ({AMIL}) for Biomedical Relation Extraction},
author={William P Hogan and Molly Huang and Yannis Katsis and Tyler Baldwin and Ho-Cheol Kim and Yoshiki Baeza and Andrew Bartko and Chun-Nan Hsu},
booktitle={3rd Conference on Automated Knowledge Base Construction},
year={2021},
url={https://openreview.net/forum?id=VX0swzJEzpg},
doi={10.24432/C5V30P}
}