Large-scale diffusion models have achieved state-of-the-art results on text-to-image (T2I) synthesis tasks. Despite their ability to generate high-quality and creative images, we observe that attribute binding and compositionality remain major challenges, especially when multiple objects are involved. In this work, we improve the compositional skills of T2I models, specifically more accurate attribute binding and better image composition. To do so, we incorporate linguistic structures into the diffusion guidance process, building on the controllable properties of manipulating cross-attention layers in diffusion-based T2I models. We observe that keys and values in cross-attention layers carry strong semantic meaning associated with object layout and content. Therefore, we can better preserve the compositional semantics of the generated image by manipulating the cross-attention representations according to linguistic insights. Built upon Stable Diffusion, a state-of-the-art T2I model, our structured cross-attention design is efficient and requires no additional training samples. We achieve better compositional skills in both qualitative and quantitative results, leading to a 5-8% advantage in head-to-head user comparison studies. Lastly, we conduct an in-depth analysis to reveal potential causes of incorrect image compositions and to justify the properties of cross-attention layers in the generation process.
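For intuition only, here is a minimal sketch of one way to "manipulate cross-attention representations according to linguistic insights": the attention map is computed once from the full prompt's keys, while the values are swapped, in turn, with encodings of individual noun phrases, and the outputs are averaged. Tensor shapes and the `phrase_values` input are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: structured cross-attention that shares one attention map (layout) across
# the full-prompt values and per-noun-phrase values (content/attributes).
import torch

def structured_cross_attention(q, full_keys, full_values, phrase_values, scale):
    """
    q:             (batch, n_img_tokens, d)   image-token queries
    full_keys:     (batch, n_txt_tokens, d)   keys from the full prompt encoding
    full_values:   (batch, n_txt_tokens, d)   values from the full prompt encoding
    phrase_values: list of (batch, n_txt_tokens, d) values from each noun-phrase encoding
    """
    # The attention map is computed once from the full prompt, so layout is shared.
    attn = torch.softmax(q @ full_keys.transpose(-1, -2) * scale, dim=-1)
    # Apply the same map to the full-prompt values and to each phrase's values,
    # then average, so each concept's attributes come from its own encoding.
    outs = [attn @ full_values] + [attn @ v for v in phrase_values]
    return torch.stack(outs, dim=0).mean(dim=0)
```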
Federated embodied agent learning protects the data privacy of individual visual environments by keeping data local to each client (the individual environment) during training. However, since the local data is inaccessible to the server under federated learning, attackers can easily poison a local client's training data to plant a backdoor in the agent without notice. Deploying such an agent raises the risk of harm to humans, as attackers can navigate and control the agent as they wish via the backdoor. Towards Byzantine-robust federated embodied agent learning, in this paper we study attack and defense for the task of vision-and-language navigation (VLN), where the agent is required to follow natural language instructions to navigate indoor environments. First, we introduce a simple but effective attack strategy, Navigation as Wish (NAW), in which a malicious client manipulates local trajectory data to implant a backdoor into the global model. Results on two VLN datasets (R2R and RxR) show that NAW can easily control the navigation of the deployed VLN agent regardless of the language instruction, without affecting its performance on normal test sets. We then propose a new Prompt-Based Aggregation (PBA) to defend against the NAW attack in federated VLN, which provides the server with a ''prompt'' of the vision-and-language alignment variance between benign and malicious clients so that they can be distinguished during training. We validate the effectiveness of PBA in protecting the global model from the NAW attack; it outperforms other state-of-the-art defense methods by a large margin on the defense metrics for R2R and RxR.
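As a simplified illustration of the general defend-at-aggregation idea (not PBA's actual mechanism), the server can score each client update by a vision-and-language alignment statistic and drop clients whose scores deviate strongly from the rest before averaging. The `alignment_scores` input is a placeholder assumption.

```python
# Sketch: robust federated aggregation that filters outlier clients by an
# alignment statistic before FedAvg-style averaging.
import numpy as np

def robust_aggregate(client_updates, alignment_scores, z_thresh=2.0):
    """
    client_updates:   list of dicts {param_name: np.ndarray}, one per client
    alignment_scores: per-client scalar measuring vision-language alignment on a probe
    """
    scores = np.asarray(alignment_scores, dtype=np.float64)
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-8          # robust spread estimate
    keep = [i for i, s in enumerate(scores) if abs(s - med) / mad < z_thresh]
    # Average only the updates from clients that look benign.
    return {name: np.mean([client_updates[i][name] for i in keep], axis=0)
            for name in client_updates[0]}
```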
Prompt tuning is a new few-shot transfer learning technique that only tunes the learnable prompt for pre-trained vision and language models such as CLIP. However, existing prompt tuning methods tend to learn spurious or entangled representations, which leads to poor generalization to unseen concepts. Towards non-spurious and efficient prompt learning from limited examples, this paper presents a novel \underline{\textbf{C}}ounterfactual \underline{\textbf{P}}rompt \underline{\textbf{L}}earning (CPL) method for vision and language models, which simultaneously employs counterfactual generation and contrastive learning in a joint optimization framework. In particular, CPL constructs counterfactuals by identifying the minimal non-spurious feature change between semantically similar positive and negative samples that causes a concept change, and learns more generalizable prompt representations from both factual and counterfactual examples via contrastive learning. Extensive experiments demonstrate that CPL obtains superior few-shot performance on different vision and language tasks compared with previous prompt tuning methods on CLIP. On image classification, we achieve a 3.55\% average relative improvement on unseen classes across seven datasets; on image-text retrieval and visual question answering, we gain up to 4.09\% and 25.08\% relative improvements across three few-shot scenarios on unseen test sets, respectively.
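A hedged sketch of the kind of counterfactual-contrastive objective described above (an interpretation, not CPL's exact loss): a counterfactual feature is built by minimally mixing a negative sample into the positive one, and the learnable prompt is pushed to score the factual image above its counterfactual. The mixing mask `m` and temperature are assumptions.

```python
# Sketch: contrastive loss over a factual image feature and its minimally-edited
# counterfactual, conditioned on the prompt-tuned text feature.
import torch
import torch.nn.functional as F

def counterfactual_contrastive_loss(text_feat, pos_img, neg_img, m, tau=0.07):
    """
    text_feat: (d,) feature from the prompt-conditioned text encoder
    pos_img:   (d,) image feature matching the text
    neg_img:   (d,) semantically similar but non-matching image feature
    m:         (d,) mask in [0, 1] selecting the minimal concept-changing feature edit
    """
    counterfactual = (1 - m) * pos_img + m * neg_img       # minimal non-spurious change
    feats = torch.stack([pos_img, counterfactual])         # factual vs. counterfactual
    logits = F.normalize(feats, dim=-1) @ F.normalize(text_feat, dim=-1) / tau
    # The prompt should rank the factual image above the counterfactual one.
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))
```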
Vision-and-language navigation requires an agent to follow natural language instructions to reach a specific goal. The large discrepancy between seen and unseen environments makes it challenging for the agent to generalize well. Prior studies propose data augmentation methods to mitigate the data bias explicitly or implicitly and provide improvements in generalization. However, they try to memorize augmented trajectories and ignore the distribution shift under unseen environments at test time. In this paper, we propose an unseen Discrepancy Anticipating Vision and Language navigation framework (DAVIS) that generalizes to unseen environments by encouraging test-time visual consistency. Specifically, we devise: 1) a semi-supervised framework, DAVIS, that leverages visual consistency signals across similar semantic observations; 2) a two-stage learning procedure that encourages adaptation to the test-time distribution. The framework enhances the basic mixture of imitation and reinforcement learning with momentum contrast to encourage stable decision-making on similar observations under both the joint training stage and the test-time adaptation stage. Extensive experiments show that DAVIS achieves model-agnostic improvements over previous state-of-the-art VLN baselines on the R2R and RxR benchmarks. Our source code and data are in the supplementary materials.
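A rough sketch of the momentum-contrast-style consistency signal described above (an interpretation, not the released DAVIS code): a slowly updated momentum encoder provides stable targets, and the online encoder is pulled toward the momentum features of semantically similar observations at test time. Encoder interfaces and the update rate are assumptions.

```python
# Sketch: visual-consistency adaptation with a momentum (target) encoder.
import copy
import torch
import torch.nn.functional as F

class ConsistencyAdapter:
    def __init__(self, encoder, momentum=0.999):
        self.online = encoder
        self.target = copy.deepcopy(encoder)   # slowly moving momentum encoder
        self.m = momentum

    @torch.no_grad()
    def _update_target(self):
        for p_t, p_o in zip(self.target.parameters(), self.online.parameters()):
            p_t.mul_(self.m).add_(p_o, alpha=1 - self.m)

    def consistency_loss(self, obs, similar_obs):
        z = F.normalize(self.online(obs), dim=-1)
        with torch.no_grad():
            z_sim = F.normalize(self.target(similar_obs), dim=-1)
        self._update_target()
        return 1 - (z * z_sim).sum(-1).mean()   # cosine-similarity consistency
```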
Building a conversational embodied agent to execute real-life tasks has been a long-standing yet quite challenging research goal, as it requires effective human-agent communication, multi-modal understanding, long-horizon sequential decision making, and more. Traditional symbolic methods suffer from scaling and generalization issues, while end-to-end deep learning models suffer from data scarcity and high task complexity and are often hard to explain. To benefit from both worlds, we propose JARVIS, a neuro-symbolic commonsense reasoning framework for modular, generalizable, and interpretable conversational embodied agents. First, it acquires symbolic representations by prompting large language models (LLMs) for language understanding and sub-goal planning, and by constructing semantic maps from visual observations. Then the symbolic module performs sub-goal planning and action generation based on task-level and action-level common sense. Extensive experiments on the TEACh dataset validate the efficacy and efficiency of our JARVIS framework, which achieves state-of-the-art (SOTA) results on all three dialog-based embodied tasks, including Execution from Dialog History (EDH), Trajectory from Dialog (TFD), and Two-Agent Task Completion (TATC) (e.g., our method boosts the unseen success rate of EDH from 6.1\% to 15.8\%). Moreover, we systematically analyze the essential factors that affect task performance and demonstrate the superiority of our method in the few-shot setting. Our JARVIS model ranks first in the Alexa Prize SimBot Public Benchmark Challenge.
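Purely illustrative of the "prompt an LLM for sub-goal planning" step: a minimal routine that builds a few-shot prompt from a dialog and parses an ordered sub-goal list. The in-context examples, the `llm` callable, and the output format are placeholder assumptions, not the paper's actual prompts.

```python
# Sketch: turn a TEACh-style dialog into an ordered list of symbolic sub-goals
# by few-shot prompting of a language model.
def plan_subgoals(dialog_history, llm, examples):
    shots = "\n\n".join(
        f"Dialog: {ex['dialog']}\nSub-goals: {'; '.join(ex['subgoals'])}" for ex in examples
    )
    prompt = (
        "Convert the household dialog into an ordered list of sub-goals.\n\n"
        f"{shots}\n\nDialog: {dialog_history}\nSub-goals:"
    )
    completion = llm(prompt)                     # any text-completion LLM callable
    return [g.strip() for g in completion.split(";") if g.strip()]
```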
Efficient vision works maximize accuracy under a latency budget. These works evaluate accuracy offline, one image at a time. However, real-time vision applications such as autonomous driving operate in a streaming setting, where the ground truth changes between the start and the end of inference. This leads to a significant drop in accuracy. Recent work has therefore proposed to maximize accuracy in the streaming setting on average. In this paper, we propose to maximize streaming accuracy for every environment context. We argue that scene difficulty influences the initial (offline) accuracy difference, while obstacle displacement in the scene affects the subsequent accuracy degradation. Our method, Octopus, uses these scenario properties to select configurations that maximize streaming accuracy at test time. Our method improves tracking performance (S-MOTA) by up to 7.4% over the conventional static approach. Furthermore, the performance improvement from our method comes in addition to, rather than instead of, advances in offline accuracy.
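A toy sketch of the configuration-selection idea (not the paper's actual policy): estimate scene difficulty and obstacle displacement, then look up, in an offline-profiled table, the configuration with the best expected streaming accuracy for that regime. The thresholds and the profile table below are assumptions.

```python
# Sketch: per-scenario configuration selection for streaming perception.
def select_config(scene_difficulty, displacement, profile):
    """
    profile: dict mapping (difficulty_bin, displacement_bin) -> best configuration,
             built offline by profiling streaming accuracy per regime.
    """
    difficulty_bin = "hard" if scene_difficulty > 0.5 else "easy"
    displacement_bin = "fast" if displacement > 10.0 else "slow"   # pixels/frame, illustrative
    return profile[(difficulty_bin, displacement_bin)]

# Example profile: a heavier model when the scene is easy and slow-moving,
# a lighter low-latency one when obstacles move quickly.
profile = {
    ("easy", "slow"): {"model": "large", "input_scale": 1.0},
    ("easy", "fast"): {"model": "small", "input_scale": 0.75},
    ("hard", "slow"): {"model": "large", "input_scale": 0.75},
    ("hard", "fast"): {"model": "small", "input_scale": 0.5},
}
```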
Various fairness constraints have been proposed in the literature to mitigate group-level statistical bias. Their impact has largely been evaluated over different groups of the population corresponding to a set of sensitive attributes, such as race or gender. Nonetheless, the community has not sufficiently explored how fairness constraints behave at the instance level. Built upon the concept of the influence function, a measure that characterizes the impact of a training example on the target model and its predictive performance, this work studies the influence of training examples when fairness constraints are imposed. We find that, under certain assumptions, the influence function with respect to fairness constraints can be decomposed into a kernelized combination of training examples. One promising application of the proposed fairness influence function is to identify suspicious training examples that may cause model discrimination by ranking their influence scores. We demonstrate with extensive experiments that training on a subset of weighty data examples leads to lower fairness violations at a trade-off in accuracy.
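For background, a sketch of the standard influence-function machinery this abstract builds on; replacing the usual test loss with a fairness violation measure $c(\theta)$ gives the instance-level fairness influence discussed above. The kernelized decomposition is the paper's own result and is only referenced, not reproduced, here.

```latex
% Standard up-weighting influence function; c(\theta) stands in for a fairness
% violation measure, \ell is the training loss, and H is the empirical Hessian.
\hat{\theta}_{\epsilon,z} = \arg\min_{\theta}\; \frac{1}{n}\sum_{i=1}^{n} \ell(z_i,\theta) + \epsilon\,\ell(z,\theta),
\qquad
\mathcal{I}_{c}(z) = \left.\frac{d\, c(\hat{\theta}_{\epsilon,z})}{d\epsilon}\right|_{\epsilon=0}
= -\nabla_{\theta} c(\hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta}\,\ell(z,\hat{\theta}),
\quad
H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}^{2}\,\ell(z_i,\hat{\theta}).
```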
Benefiting from the flexibility and compositionality of language, humans naturally intend to use language to command embodied agents to perform complex tasks such as navigation and object manipulation. In this work, we aim to fill the last-mile gap for embodied agents: object manipulation by following human guidance, e.g., "move the red mug next to the box while keeping it upright." To this end, we introduce an Automatic Manipulation Solver (AMSolver) simulator and, based on it, build a Vision-and-Language Manipulation benchmark (VLMbench) containing various language instructions for robotic manipulation tasks. Specifically, modular rule-based task templates are created to automatically generate robot demonstrations with language instructions, covering diverse object shapes and appearances, action types, and motion constraints. We also develop a keypoint-based model, 6D-CLIPort, to handle multi-view observations and language input and to output a sequence of 6 degrees of freedom (DoF) actions. We hope the new simulator and benchmark will facilitate future research on language-guided robotic manipulation.
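Purely illustrative of what a "modular rule-based task template" could look like (the field names and template API are assumptions, not AMSolver's interface): the template fills object, receptacle, and constraint slots to emit a paired language instruction and a machine-readable goal specification for demonstration generation.

```python
# Sketch: a rule-based pick-and-place task template that pairs an instruction
# with a goal specification.
import random

def sample_pick_and_place_task(objects, receptacles, constraints, seed=None):
    rng = random.Random(seed)
    obj = rng.choice(objects)             # e.g., {"name": "red mug", "id": 3}
    dst = rng.choice(receptacles)         # e.g., {"name": "box", "id": 7}
    cons = rng.choice(constraints)        # e.g., "while keeping it upright"
    instruction = f"Move the {obj['name']} next to the {dst['name']} {cons}."
    goal_spec = {"pick": obj["id"], "place_near": dst["id"], "constraint": cons}
    return instruction, goal_spec
```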
Language planning aims to implement complex high-level goals by decomposing them into simpler low-level steps. Such procedural reasoning ability is essential for applications such as household robots and virtual assistants. Although language planning is a basic skill for humans in daily life, it remains a challenge for large language models (LLMs) that lack the deep commonsense knowledge of the real world. Previous methods require either manual exemplars or annotated programs to acquire such ability from LLMs. In contrast, this paper proposes a neuro-symbolic Causal Language Planner (CLAP) that elicits procedural knowledge from LLMs with commonsense-infused prompting. Pre-trained knowledge in LLMs is essentially an unobserved confounder that induces spurious correlations between tasks and action plans. Through the lens of a structural causal model (SCM), we propose an effective strategy for constructing prompts as causal interventions on the SCM. Using graph sampling techniques and symbolic program executors, our strategy formalizes structured causal prompts from commonsense knowledge bases. CLAP obtains state-of-the-art performance on WikiHow and RobotHow, achieving a 5.28% relative improvement in human evaluation under the counterfactual setting. This indicates the superiority of CLAP in causal language planning, both semantically and sequentially.
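A hedged illustration of "commonsense-infused prompting" (assumptions throughout; this is not the paper's implementation): sample triples relevant to the task from a commonsense knowledge graph and prepend them to the planning prompt, playing the role of the intervention that blocks the spurious task-plan correlations discussed above.

```python
# Sketch: build a planning prompt augmented with sampled commonsense triples.
import random

def causal_prompt(task, kg_triples, llm, k=5, seed=0):
    """kg_triples: list of (head, relation, tail) strings from a commonsense KB."""
    rng = random.Random(seed)
    relevant = [t for t in kg_triples
                if any(w in " ".join(t).lower() for w in task.lower().split())]
    sampled = rng.sample(relevant, min(k, len(relevant)))
    facts = "\n".join(f"- {h} {r} {t}" for h, r, t in sampled)
    prompt = (
        f"Known facts:\n{facts}\n\n"
        f"Task: {task}\nList the ordered steps to accomplish the task:\n1."
    )
    return llm(prompt)        # any text-completion LLM callable
```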
In computer vision, adapting large-scale pretrained vision models (e.g., vision transformers) to downstream tasks has achieved great transfer learning performance. Common approaches for model adaptation either update all model parameters or leverage linear probes. In this paper, we aim to study parameter-efficient model adaptation strategies for vision transformers on the image classification task. We formulate efficient model adaptation as a subspace training problem and perform comprehensive benchmarking of different efficient adaptation methods. We conduct an empirical study of each efficient model adaptation method, focusing on its performance alongside its parameter cost. Furthermore, we propose a parameter-efficient model adaptation framework that first selects submodules by measuring local intrinsic dimensions and then projects them into a subspace for further decomposition via a novel Kronecker Adaptation (KAdaptation) method. We analyze and compare our method with a diverse set of baseline model adaptation methods (including state-of-the-art methods for pretrained language models). Our method performs best in terms of the tradeoff between accuracy and parameter efficiency across 20 image classification datasets under the few-shot setting and 7 image classification datasets under the full-shot setting.
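A minimal sketch of a Kronecker-factored weight update in the spirit of KAdaptation (the factor shapes and where the update is applied are assumptions, not the paper's exact decomposition): the frozen weight W is adapted by a low-parameter update delta_W = A ⊗ B, so only the two small factors are trained.

```python
# Sketch: adapt a frozen linear weight with a trainable Kronecker-product update.
import torch
import torch.nn as nn

class KroneckerAdapter(nn.Module):
    def __init__(self, weight, a_shape=(12, 12)):
        super().__init__()
        out_dim, in_dim = weight.shape
        assert out_dim % a_shape[0] == 0 and in_dim % a_shape[1] == 0
        self.register_buffer("frozen_weight", weight.detach())
        self.A = nn.Parameter(torch.zeros(a_shape))                       # small factor, zero-init
        self.B = nn.Parameter(torch.randn(out_dim // a_shape[0],
                                          in_dim // a_shape[1]) * 0.02)   # small factor

    def forward(self, x):
        delta = torch.kron(self.A, self.B)            # (out_dim, in_dim) update
        return x @ (self.frozen_weight + delta).t()
```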