Workshops
Knowledge Bases and Multiple Modalities
Recently, there has been growing interest in combining knowledge bases with multiple modalities such as language, vision, and speech. These combinations have yielded improvements on various downstream tasks, including question answering, image classification, object detection, and link prediction. The objective of the KBMM workshop is to bring together researchers interested in (a) combining knowledge bases with other modalities to enable more effective downstream tasks, (b) improving the completion and construction of knowledge bases from multiple modalities, and, in general, sharing state-of-the-art approaches, best practices, and future directions.
Structured and Unstructured KBs
Knowledge bases have been a key part of AI research and applications for decades. Traditionally, KBs were purely symbolic, but recent advances in deep learning have raised the possibility of purely unstructured/implicit neural KBs.
The objective of this workshop is to bring together researchers exploring different paradigms of extracting, representing, and applying knowledge. We will discuss knowledge spanning from Structured (tables, graphs, etc.) to Unstructured (plain text), and from Explicit (symbolic) to Implicit (parameterized).
Natural Language Processing and Data Mining for Scientific Text
The primary goal of this half-day workshop is to bring together researchers from diverse fields who are interested in extracting and representing knowledge from scientific text, and/or applications or methods for improving access to and understanding of such knowledge. Please see https://scinlp.org for schedule, accepted posters, and other details.
Bias in Automatic Knowledge Graph Construction
Knowledge Graphs (KGs) store human knowledge about the world in a structured format, e.g., triples of facts or graphs of entities and relations, to be processed by AI systems. In the past decade, extensive research efforts have gone into constructing and utilizing knowledge graphs for tasks in natural language processing, information retrieval, recommender systems, and more. Once constructed, knowledge graphs are often treated as “gold standard” data sources that safeguard the correctness of other systems. Because the biases inherent in KGs may become magnified and spread through such systems, it is crucial that we acknowledge and address the various types of bias in knowledge graph construction.
Such biases may originate in the very design of the KG, in the source data from which it is created (semi-)automatically, and in the algorithms used to sample, aggregate, and process that data. Causes of bias include systematic errors due to selecting non-random items (selection bias), misremembering certain events (recall bias), and interpreting facts in a way that affirms individuals’ preconceptions (confirmation bias). Biases typically appear subliminally in expressions, utterances, and text in general and can carry over into downstream representations such as embeddings and knowledge graphs.
This workshop – to be held for the first time at AKBC 2020 – addresses the questions: “How do such biases originate?”, “How do we identify them?”, and “What is the appropriate way to handle them, if at all?” This topic is as yet largely unexplored, and the goal of our workshop is to start a meaningful, long-lasting dialogue spanning researchers from a wide variety of backgrounds and communities.