Vision and language (VL) models are known to exploit non-robust indicators in individual modalities (e.g., introduced by distributional biases), instead of focusing on relevant information in each modality. If accuracy drops only slightly when a unimodal model is used on a VL task, this suggests that so-called unimodal collapse has occurred. But how can the degree of unimodal collapse be quantified reliably, at both dataset and instance level, to diagnose and combat it in a targeted way? We present MM-SHAP, a performance-agnostic multimodality score that quantifies the proportions in which a model uses individual modalities in multimodal tasks. MM-SHAP is based on Shapley values and can be applied in two ways: (1) to compare models for their degree of multimodality, and (2) to measure the contribution of individual modalities for a given task and dataset. Experiments with six VL models -- LXMERT, CLIP and four ALBEF variants -- on four VL tasks show that unimodal collapse can occur to different degrees and in different directions, contradicting the widespread assumption that unimodal collapse is one-sided. We recommend MM-SHAP for analysing multimodal tasks, to diagnose and guide progress towards multimodal integration. Code available at: https://github.com/Heidelberg-NLP/MM-SHAP
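As a concrete illustration of the score, here is a minimal Python sketch of the ratio computation, assuming token-level Shapley values for the text tokens and image patches have already been estimated; the function name, the input layout (text tokens first, then image patches), and the split into textual and visual shares are illustrative assumptions, not the authors' released code:

```python
import numpy as np

def mm_shap(shapley_values, n_text_tokens):
    """Sketch of a multimodality score from token-level Shapley values.

    `shapley_values` holds one attribution per input token; the first
    `n_text_tokens` entries are assumed to belong to text tokens, the
    rest to image patches. The score is the share of total Shapley
    magnitude attributed to each modality, so it is independent of
    whether the prediction was correct (performance-agnostic).
    """
    phi = np.abs(np.asarray(shapley_values, dtype=float))
    t_shap = phi[:n_text_tokens].sum() / phi.sum()  # textual share
    v_shap = 1.0 - t_shap                           # visual share
    return t_shap, v_shap

# Example: attributions concentrated on text (possible unimodal collapse)
t, v = mm_shap([0.9, 0.7, 0.8, 0.1, 0.05, 0.05], n_text_tokens=3)
print(f"text share = {t:.2f}, visual share = {v:.2f}")  # 0.92 vs. 0.08
```

A balanced multimodal model would yield shares near 0.5/0.5; a strongly skewed split in either direction would indicate collapse onto one modality.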
We propose VALSE (Vision And Language Structured Evaluation), a new benchmark designed to test general-purpose pretrained vision and language (V&L) models for their visio-linguistic grounding capabilities on specific linguistic phenomena. VALSE offers six test suites covering a variety of linguistic constructs. Solving them requires models to ground the linguistic phenomena in the visual modality, allowing a more fine-grained evaluation than has been possible so far. We build VALSE using methods that support the construction of valid foils, and report results from evaluating five widely used V&L models. Our experiments show that current models have considerable difficulty with most phenomena. We therefore expect VALSE to serve as an important benchmark for measuring the future progress of pretrained V&L models from a linguistic perspective, complementing the canonical task-centred V&L evaluations.
Large-scale pretraining is fast becoming the norm in vision-language (VL) modelling. However, prevailing VL approaches are limited by the requirement for labelled data and by complex multi-step pretraining objectives. We present MAGMA, a simple method for augmenting generative language models with additional modalities using adapter-based finetuning. Building on Frozen, we train a series of VL models that autoregressively generate text from arbitrary combinations of visual and textual input. Pretraining is entirely end-to-end with a single language modelling objective, simplifying optimisation compared to previous approaches. Importantly, the language model weights remain unchanged during training, allowing the transfer of encyclopedic knowledge and in-context learning abilities from language pretraining. MAGMA outperforms Frozen on open-ended generative tasks, achieving state-of-the-art results on the OKVQA benchmark and competitive results on a range of other popular VL benchmarks, while pretraining on 0.2% of the number of samples used to train SimVLM.
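To make the adapter-based recipe concrete, here is a minimal PyTorch sketch under assumed dimensions: a small bottleneck adapter with a residual connection is the only trainable component, while all pretrained language-model weights stay frozen. The module names, layer sizes, and wiring are hypothetical simplifications for illustration, not MAGMA's actual implementation:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    plus a residual connection (hypothetical sizes for illustration)."""
    def __init__(self, hidden_dim=768, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual keeps the frozen path intact

# Stand-in for a pretrained LM: freeze every weight so language
# pretraining (knowledge, in-context learning) is preserved.
lm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=6,
)
for p in lm.parameters():
    p.requires_grad = False

# One trainable adapter per frozen layer.
adapters = nn.ModuleList(Adapter() for _ in range(6))

def forward_with_adapters(x):
    # Run each frozen LM layer, then pass its output through the
    # corresponding trainable adapter (simplified wiring).
    for layer, adapter in zip(lm.layers, adapters):
        x = adapter(layer(x))
    return x

out = forward_with_adapters(torch.randn(2, 10, 768))  # (batch, seq, hidden)
print(out.shape, sum(p.numel() for p in adapters.parameters()), "trainable params")
```

Because only the adapters (and, in the full method, a visual prefix encoder) receive gradients, the whole pipeline can be trained end-to-end with a single language-modelling loss, which is the simplification the abstract highlights.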