A Study of Zero-shot Adaptation with Commonsense Knowledge
Jiarui Zhang, Filip Ilievski, Kaixin Ma, Jonathan Francis, Alessandro Oltramari
Self-supervision with synthetic training data built from knowledge graphs has proven useful for enhancing language model accuracy in zero-shot evaluation on commonsense reasoning tasks. Yet, because these improvements are reported in aggregate, little is known about how to select the appropriate knowledge for generalizable performance across tasks, how to combine this knowledge with neural language models, and how these pairings affect granular task performance. In this paper, we study the sensitivity of language models to knowledge sampling strategies, modeling architecture choices, and task properties. We evaluate accuracy overall and in relation to four task properties: domain overlap and vocabulary overlap between the training and test data, answer similarity, and answer length. Our experiments show that: (i) encoder-decoder models benefit from more training data; (ii) sampling strategies that balance across different aspects, or that focus on knowledge dimensions, yield the best accuracy; (iii) synthetic data is most effective for tasks with low domain overlap, and for questions with short answers and dissimilar answer candidates; and (iv) as a side effect of our overall study, our best T5 model reaches state-of-the-art results on zero-shot commonsense reasoning, narrowing the gap with supervised models.
Citation
@inproceedings{zhang2022a,
  title     = {A Study of Zero-shot Adaptation with Commonsense Knowledge},
  author    = {Jiarui Zhang and Filip Ilievski and Kaixin Ma and Jonathan Francis and Alessandro Oltramari},
  booktitle = {4th Conference on Automated Knowledge Base Construction},
  year      = {2022}
}