Training Vision-Language Models with Less Bimodal Supervision
Elad Segal, Ben Bogin, Jonathan Berant.
Standard practice in pretraining multimodal models, such as vision-language models, is to rely on pairs of aligned inputs from both modalities, for example, aligned image-text pairs. However, such pairs can be difficult to obtain in low-resource settings and for some modality pairs (e.g., structured tables and images). In this work, we investigate the extent to which we can reduce the reliance on such parallel data, which we term bimodal supervision, and instead use models that are pretrained on each modality independently. We experiment with a high-performing vision-language model and analyze the effect of bimodal supervision on three vision-language tasks. We find that on simpler tasks, such as VQAv2 and GQA, one can eliminate bimodal supervision completely, suffering only a minor loss in performance. Conversely, for NLVR2, which requires more complex reasoning, training without bimodal supervision leads to random performance. Nevertheless, using only 5% of the bimodal data (142K images along with their captions), or leveraging weak supervision in the form of a list of machine-generated labels for each image, leads to only a moderate degradation compared to using 3M image-text pairs: 74% → ~70%.
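To make the weak-supervision setting concrete, the following is a minimal sketch of how one could produce a list of machine-generated labels for each image using an off-the-shelf classifier. It is an illustration only, not the paper's pipeline; the model choice (torchvision's ImageNet-pretrained ResNet-50) and top-k value are assumptions made for the example.

import torch
from PIL import Image
from torchvision import models

# Off-the-shelf ImageNet classifier used as a weak labeler (illustrative choice).
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def machine_labels(image_path: str, top_k: int = 5) -> list[str]:
    """Return the top-k predicted class names for an image (weak labels)."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    probs = logits.softmax(dim=-1).squeeze(0)
    top = probs.topk(top_k)
    return [categories[i] for i in top.indices.tolist()]

# Usage: machine_labels("example.jpg") might return
# ["golden retriever", "tennis ball", "Labrador retriever", ...]

Such label lists can then stand in for captions when aligned image-text pairs are unavailable.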
Citation
@inproceedings{segal2022training,
  title={Training Vision-Language Models with Less Bimodal Supervision},
  author={Elad Segal and Ben Bogin and Jonathan Berant},
  booktitle={4th Conference on Automated Knowledge Base Construction},
  year={2022}
}