Helping end users comprehend abstract distribution shifts can greatly facilitate AI deployment. Motivated by this, we propose a novel task, dataset explanation. Given two image datasets, dataset explanation aims to automatically point out their dataset-level distribution shifts with natural language. Current techniques for monitoring distribution shifts provide inadequate information for understanding datasets with the goal of improving data quality. Therefore, we introduce GSCLIP, a training-free framework to solve the dataset explanation task. In GSCLIP, we propose the selector as the first quantitative evaluation method to identify explanations that properly summarize dataset shifts. Furthermore, we leverage this selector to demonstrate the superiority of a generator based on language-model generation. Systematic evaluation on natural data shifts verifies that GSCLIP, a combined system of a hybrid generator group and an efficient selector, is not only easy to use but also powerful for dataset explanation.
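One way to picture such a selector is as a ranking of candidate shift explanations by how differently they score against the two datasets under a vision-language model. The sketch below is a minimal illustration under that assumption, using the openai/CLIP package; the scoring rule, the `candidates` list, and the image inputs are hypothetical, not the paper's exact method.

```python
# Minimal sketch of a CLIP-based selector for dataset explanation.
# Assumption: a good explanation of the shift A -> B should match images
# in B more strongly than images in A on average.
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_images(images):
    # images: iterable of PIL images from one dataset
    batch = torch.stack([preprocess(im) for im in images]).to(device)
    with torch.no_grad():
        feats = model.encode_image(batch)
    return feats / feats.norm(dim=-1, keepdim=True)

def rank_explanations(candidates, images_a, images_b):
    # candidates: natural-language guesses at the shift, e.g. from a generator
    feats_a, feats_b = embed_images(images_a), embed_images(images_b)
    tokens = clip.tokenize(candidates).to(device)
    with torch.no_grad():
        text = model.encode_text(tokens)
    text = text / text.norm(dim=-1, keepdim=True)
    # Score = how much more the explanation matches dataset B than dataset A.
    gap = (feats_b @ text.T).mean(0) - (feats_a @ text.T).mean(0)
    order = gap.argsort(descending=True)
    return [(candidates[i], gap[i].item()) for i in order]
```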
Humans can classify an unseen category by reasoning on its language explanations. This ability is owing to the compositional nature of language: we can combine previously seen concepts to describe the new category. For example, we might describe mavens as "a kind of large bird with black feathers", so that others can use their knowledge of the concepts "large birds" and "black feathers" to recognize a maven. Inspired by this observation, in this work we tackle the zero-shot classification task by logically parsing and reasoning on natural language explanations. To this end, we propose the framework CLORE (Classification by LOgical Reasoning on Explanations). While previous methods usually regard textual information as implicit features, CLORE parses the explanations into a logical structure and then reasons along this structure on the input to produce a classification score. Experimental results on explanation-based zero-shot classification benchmarks demonstrate that CLORE is superior to baselines, mainly because it performs better on tasks requiring more logical reasoning. Alongside classification decisions, CLORE can provide the logical parsing and reasoning process as a form of rationale. Through empirical analysis we demonstrate that CLORE is also less affected by linguistic biases than baselines.
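The core idea of parsing an explanation into a logical structure and combining per-concept scores along it can be conveyed with a toy conjunction. Everything below (the string-splitting "parser", the lookup-table "matcher") is a hypothetical simplification for illustration, not CLORE's actual parser or model.

```python
# Toy illustration of classification by logical reasoning on an explanation.
# A real system would use a learned parser and neural concept matchers;
# here both are stubbed out for clarity.

def parse_explanation(text: str) -> list[str]:
    # "a kind of large birds with black feathers" -> ["large birds", "black feathers"]
    # Stand-in for a proper logical parse: split on conjunction-like markers.
    for sep in (" with ", " and "):
        if sep in text:
            head, tail = text.split(sep, 1)
            return [head.removeprefix("a kind of ").strip(), tail.strip()]
    return [text.strip()]

def concept_score(concept: str, input_features: dict[str, float]) -> float:
    # Stand-in for a neural matcher: look the concept up in precomputed scores.
    return input_features.get(concept, 0.0)

def classify(explanation: str, input_features: dict[str, float]) -> float:
    # A logical AND realized as a minimum over concept scores.
    concepts = parse_explanation(explanation)
    return min(concept_score(c, input_features) for c in concepts)

# Example: an input that strongly shows both concepts gets a high score.
features = {"large birds": 0.9, "black feathers": 0.8}
print(classify("a kind of large birds with black feathers", features))  # 0.8
```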
Vision-language models (VLMs) such as CLIP have shown promising performance on a variety of recognition tasks using the standard zero-shot classification procedure -- computing similarity between the query image and the embedded words for each category. By only using the category name, they neglect to make use of the rich context of additional information that language affords. The procedure gives no intermediate understanding of why a category is chosen, and furthermore provides no mechanism for adjusting the criteria used towards this decision. We present an alternative framework for classification with VLMs, which we call classification by description. We ask VLMs to check for descriptive features rather than broad categories: to find a tiger, look for its stripes; its claws; and more. By basing decisions on these descriptors, we can provide additional cues that encourage using the features we want to be used. In the process, we can get a clear idea of what features the model uses to construct its decision; it gains some level of inherent explainability. We query large language models (e.g., GPT-3) for these descriptors to obtain them in a scalable way. Extensive experiments show our framework has numerous advantages past interpretability. We show improvements in accuracy on ImageNet across distribution shifts; demonstrate the ability to adapt VLMs to recognize concepts unseen during training; and illustrate how descriptors can be edited to effectively mitigate bias compared to the baseline.
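In code, the procedure amounts to replacing the single class-name embedding with an average over descriptor embeddings. A minimal sketch with CLIP follows; the hard-coded descriptor lists stand in for GPT-3 output, and the file name is illustrative.

```python
# Sketch of classification by description: score each class by the mean
# CLIP similarity between the image and that class's descriptors.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# In the paper these come from an LLM; hard-coded here for illustration.
descriptors = {
    "tiger": ["a tiger, which has stripes", "a tiger, which has claws"],
    "zebra": ["a zebra, which has black and white stripes",
              "a zebra, which has a mane"],
}

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    img = model.encode_image(image)
    img = img / img.norm(dim=-1, keepdim=True)
    scores = {}
    for cls, descs in descriptors.items():
        txt = model.encode_text(clip.tokenize(descs).to(device))
        txt = txt / txt.norm(dim=-1, keepdim=True)
        # Per-descriptor similarities double as an explanation of the decision.
        scores[cls] = (img @ txt.T).mean().item()

print(max(scores, key=scores.get))
```

Editing the descriptor lists is also how bias mitigation works in this framing: removing or rewording a descriptor directly changes the evidence the classifier may use.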
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Existing explanation models generate only text for recommendations and still struggle to produce diverse content. In this paper, to further enrich explanations, we propose a new task named personalized showcases, in which we provide both textual and visual information to explain our recommendations. Specifically, we first select a personalized image set that is most relevant to a user's interest in the recommended item. Then, natural language explanations are generated accordingly, given the selected images. For this new task, we collect a large-scale dataset from Google Local (i.e., maps) and construct a high-quality subset for generating multi-modal explanations. We propose a personalized multi-modal framework that can generate diverse and visually aligned explanations via contrastive learning. Experiments show that our framework benefits from different modalities as inputs, and is able to generate more diverse and expressive explanations than previous methods on a variety of evaluation metrics.
Temporal reasoning is the task of predicting temporal relations of event pairs with corresponding contexts. While some temporal reasoning models perform reasonably well on in-domain benchmarks, we have little idea of the systems' generalizability due to existing datasets' limitations. In this work, we introduce a novel task named TODAY that bridges this gap with temporal differential analysis, which as the name suggests, evaluates if systems can correctly understand the effect of incremental changes. Specifically, TODAY makes slight context changes for given event pairs, and systems need to tell how this subtle contextual change will affect temporal relation distributions. To facilitate learning, TODAY also annotates human explanations. We show that existing models, including GPT-3, drop to random guessing on TODAY, suggesting that they heavily rely on spurious information rather than proper reasoning for temporal predictions. On the other hand, we show that TODAY's supervision style and explanation annotations can be used in joint learning and encourage models to use more appropriate signals during training and outperform across several benchmarks. TODAY can also be used to train models to solicit incidental supervision from noisy sources such as GPT-3 and moves farther towards generic temporal reasoning systems.
Many visual recognition models are evaluated only on their classification accuracy, a metric for which they obtain strong performance. In this paper, we investigate whether computer vision models can also provide correct rationales for their predictions. We propose a ``doubly right'' object recognition benchmark, where the metric requires the model to simultaneously produce both the right labels as well as the right rationales. We find that state-of-the-art visual models, such as CLIP, often provide incorrect rationales for their categorical predictions. However, by transferring the rationales from language models into visual representations through a tailored dataset, we show that we can learn a ``why prompt,'' which adapts large visual representations to produce correct rationales. Visualizations and empirical experiments show that our prompts significantly improve performance on doubly right object recognition, in addition to zero-shot transfer to unseen tasks and datasets.
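The "doubly right" metric itself is simple to state: an example counts as correct only when the predicted label and the predicted rationale both match. A hypothetical scoring helper (the pair representation is an assumption for illustration):

```python
# Hypothetical helper for "doubly right" accuracy: credit is given only
# when both the category and the rationale are predicted correctly.
def doubly_right_accuracy(preds, golds) -> float:
    # preds/golds: lists of (label, rationale) pairs
    hits = sum(
        p_label == g_label and p_rat == g_rat
        for (p_label, p_rat), (g_label, g_rat) in zip(preds, golds)
    )
    return hits / len(golds)
```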
Explaining the black-box predictions of NLP models naturally and accurately is an important open problem in natural language generation. These free-text explanations are expected to contain sufficient and carefully-selected evidence to form supportive arguments for predictions. Due to the superior generative capacity of large pretrained language models, recent work built on prompt engineering enables explanation generation without specific training. However, explanation generated through single-pass prompting often lacks sufficiency and conciseness. To address this problem, we develop an information bottleneck method EIB to produce refined explanations that are sufficient and concise. Our approach regenerates the free-text explanation by polishing the single-pass output from the pretrained language model but retaining the information that supports the contents being explained. Experiments on two out-of-domain tasks verify the effectiveness of EIB through automatic evaluation and thoroughly-conducted human evaluation.
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from making unfair decisions, the AI community has concentrated efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
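The zero-shot transfer procedure referenced throughout this collection looks roughly as follows; this is a minimal sketch in the style of the released openai/CLIP package's own examples, with an illustrative label set and file name.

```python
# Minimal sketch of CLIP zero-shot classification: embed the image and one
# templated sentence per class, then pick the most similar class.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

classes = ["dog", "cat", "airplane"]  # illustrative label set
texts = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)
image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    img_f = model.encode_image(image)
    txt_f = model.encode_text(texts)
    img_f = img_f / img_f.norm(dim=-1, keepdim=True)
    txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_f @ txt_f.T).softmax(dim=-1)

print(classes[probs.argmax().item()])
```

Because the label set is just a list of strings, swapping in a new vocabulary requires no retraining, which is what makes the zero-shot transfer in the abstract above possible.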
Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing, and has recently regained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, and the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is at https://github.com/zhijing-jin/text_style_transfer_survey
The Winograd Schema Challenge -- a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge -- was proposed by Hector Levesque in 2011. By 2019, a number of AI systems based on large pre-trained transformer-based language models, fine-tuned on these kinds of problems, achieved better than 90% accuracy. In this paper, we review the history of the Winograd Schema Challenge and assess its significance.
Visual entailment with natural language explanations aims to infer the relationship between a text-image pair and generate a sentence to explain the decision-making process. Previous methods rely mainly on a pre-trained vision-language model to perform relation inference and a language model to generate the corresponding explanation. However, pre-trained vision-language models mainly build token-level alignment between text and image while ignoring the high-level semantic alignment between phrases (chunks) and visual content, which is critical for vision-language reasoning. Moreover, an explanation generator based only on the encoded joint representation does not explicitly consider the critical decision-making points of relation inference, so the generated explanations are less faithful to visual-language reasoning. To mitigate these problems, we propose a unified chunk-aware alignment and lexical-constraint-based method, dubbed CALeC. It contains a Chunk-aware Semantic Interactor (abbr. CSI), a relation inferrer, and a Lexical Constraint-aware Generator (abbr. LeCG). Specifically, CSI exploits the sentence structure inherent in language and the various image regions to build chunk-aware semantic alignment. The relation inferrer uses an attention-based reasoning network to incorporate token-level and chunk-level vision-language representations. LeCG utilizes lexical constraints to explicitly incorporate the words or chunks that the relation inferrer focuses on into explanation generation, improving the faithfulness and informativeness of the explanations. We conduct extensive experiments on three datasets, and the results indicate that CALeC significantly outperforms competing models in inference accuracy and in the quality of the generated explanations.
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints in practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the lower level of interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches have emerged in the past 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Open-vocabulary models are a promising new paradigm for image classification. Unlike traditional classification models, open-vocabulary models classify among any arbitrary set of categories specified with natural language during inference. This natural language, called "prompts", typically consists of a set of hand-written templates (e.g., "a photo of a {}") that are completed with each category name. This work introduces a simple method to generate higher-accuracy prompts, without explicit knowledge of the image domain and with far fewer hand-constructed sentences. To achieve this, we combine open-vocabulary models with large language models (LLMs) to create Customized Prompts via Language models (CuPL, pronounced "couple"). In particular, we leverage the knowledge contained in LLMs to generate many descriptive sentences customized for each object category. We find that this straightforward and general approach improves accuracy on a range of zero-shot image classification benchmarks, including a gain of over one percentage point on ImageNet. Finally, this method requires no additional training and remains entirely zero-shot. Code is available at https://github.com/sarahpratt/cupl.
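The pipeline reduces to two steps: ask an LLM for descriptive sentences per class, then use the mean embedding of those sentences in place of a hand-written template. A hedged sketch follows; the LLM prompt wording and the `gpt-4o-mini` model name are illustrative assumptions, not the paper's exact configuration (which used GPT-3).

```python
# Sketch of CuPL-style customized prompts: generate descriptive sentences
# for each class with an LLM, then average their CLIP text embeddings.
import torch
import clip
from openai import OpenAI

llm = OpenAI()

def generate_prompts(classname: str, n: int = 10) -> list[str]:
    # Illustrative prompt; the paper uses several prompt variants per class.
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Describe what a {classname} looks like "
                              f"in {n} short sentences, one per line."}],
    )
    return [s.strip() for s in resp.choices[0].message.content.splitlines()
            if s.strip()]

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def class_embedding(classname: str) -> torch.Tensor:
    tokens = clip.tokenize(generate_prompts(classname), truncate=True).to(device)
    with torch.no_grad():
        txt = model.encode_text(tokens)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return txt.mean(0)  # one customized prompt ensemble per class
```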
Learning visual representations from natural language supervision has recently shown great promise in a number of pioneering works. In general, these language-augmented visual models demonstrate strong transferability to a variety of datasets and tasks. However, it remains challenging to evaluate the transferability of these models due to the lack of easy-to-use evaluation toolkits and public benchmarks. To tackle this, we build ELEVATER (Evaluation of Language-augmented Visual Task-level Transfer), the first benchmark and toolkit for evaluating (pre-trained) language-augmented visual models. ELEVATER is composed of three components. (i) Datasets. As downstream evaluation suites, it consists of 20 image classification datasets and 35 object detection datasets, each augmented with external knowledge. (ii) Toolkit. An automatic hyper-parameter tuning toolkit is developed to facilitate model evaluation on downstream tasks. (iii) Metrics. A variety of evaluation metrics are used to measure sample efficiency (zero-shot and few-shot) and parameter efficiency (linear probing and full model fine-tuning). We publicly release ELEVATER at https://computer-vision-in-the-wild.github.io/elevater/
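Of the parameter-efficiency settings named above, linear probing is the simplest to picture: freeze the backbone, fit a linear classifier on its features, and report accuracy. A generic sketch under that reading (not ELEVATER's actual toolkit code; the feature arrays are assumed to be precomputed backbone embeddings):

```python
# Generic linear-probe evaluation: a linear classifier on frozen features.
from sklearn.linear_model import LogisticRegression

def linear_probe(X_train, y_train, X_test, y_test) -> float:
    # X_*: precomputed embeddings from a frozen backbone; y_*: class labels.
    clf = LogisticRegression(max_iter=1000)  # only the probe is trained
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)  # top-1 accuracy of the probe
```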
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing DNNs become more complex and diverse, ranging from improving a conventional model accuracy metric to infusing advanced human virtues such as fairness, accountability, transparency (FAccT), and unbiasedness. Recently, techniques in Explainable Artificial Intelligence (XAI) have been attracting considerable attention, and have tremendously helped Machine Learning (ML) engineers in understanding AI models. However, at the same time, we started to witness an emerging need beyond XAI among AI communities: based on the insights learned from XAI, how can we better empower ML engineers in steering their DNNs so that the model's reasonableness and performance can be improved as intended? This article provides a timely and extensive literature overview of the field of Explanation-Guided Learning (EGL), a domain of techniques that steer the DNNs' reasoning process by adding regularization, supervision, or intervention on model explanations. In doing so, we first provide a formal definition of EGL and its general learning paradigm. Secondly, an overview of the key factors for EGL evaluation, as well as summarization and categorization of existing evaluation procedures and metrics for EGL, are provided. Finally, the current and potential future application areas and directions of EGL are discussed, and an extensive experimental study is presented aiming at providing comprehensive comparative studies among existing EGL models in various popular application domains, such as Computer Vision (CV) and Natural Language Processing (NLP) domains.
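One concrete instance of "adding regularization on model explanations" is input-gradient supervision in the spirit of right-for-the-right-reasons training: penalize saliency that falls outside an annotated relevance mask. A PyTorch sketch under that assumption follows; the mask convention and the weighting term `lam` are hypothetical choices, not a prescription from the survey.

```python
# Sketch of explanation-guided training: task loss plus a penalty on
# input-gradient saliency outside an annotated relevance mask.
import torch
import torch.nn.functional as F

def egl_loss(model, x, y, mask, lam=1.0):
    # mask: 1 where a human says evidence may lie, 0 elsewhere (assumed given)
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task = F.cross_entropy(logits, y)
    # Saliency = gradient of the log-probabilities with respect to the input.
    grads = torch.autograd.grad(F.log_softmax(logits, -1).sum(), x,
                                create_graph=True)[0]
    penalty = ((1 - mask) * grads).pow(2).mean()  # punish off-mask evidence
    return task + lam * penalty
```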
Recent text-to-image matching models apply contrastive learning to large corpora of images and sentences. While these models can provide a powerful score for matching and for subsequent zero-shot tasks, they are not capable of generating a caption for a given image. In this work, we repurpose such models to generate descriptive text for a given image at inference time, without any further training or tuning steps. This is done by combining the visual-semantic model with a large language model, benefiting from the knowledge in both web-scale models. The resulting captions are much less restrictive than those obtained by supervised captioning methods. Moreover, as a zero-shot method, it is extremely flexible, and we demonstrate its ability to perform image arithmetic, in which the inputs can be either images or text and the output is a sentence. This enables novel high-level vision capabilities such as comparing two images or solving visual analogy tests.
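A faithful reimplementation steers the language model's decoding with the matching model's signal; the much simpler rerank variant below conveys the combination idea instead (sample candidate captions from an LM, keep the one CLIP scores highest). All model choices, the seed prompt, and the rerank strategy itself are illustrative simplifications, not the paper's actual procedure.

```python
# Simplified zero-shot captioning by reranking: sample captions from a
# language model, keep the one CLIP judges most similar to the image.
import torch
import clip
from PIL import Image
from transformers import pipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
lm = pipeline("text-generation", model="gpt2")

def caption(image_path: str, n: int = 20) -> str:
    outs = lm("A photo of", max_new_tokens=12, num_return_sequences=n,
              do_sample=True)
    candidates = [o["generated_text"] for o in outs]
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    tokens = clip.tokenize(candidates, truncate=True).to(device)
    with torch.no_grad():
        img = clip_model.encode_image(image)
        txt = clip_model.encode_text(tokens)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return candidates[(img @ txt.T).argmax().item()]
```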
Existing image-to-image translation techniques commonly suffer from two critical issues: heavy reliance on per-sample domain annotation and/or inability to handle multiple attributes per image. Recent methods adopt clustering approaches to easily provide per-sample annotations in an unsupervised manner. However, they cannot account for the real-world setting, where one sample may have multiple attributes; moreover, the semantics of the clusters are not easily coupled to human understanding. To overcome these issues, we present a language-driven image-to-image translation model, dubbed LANIT. We leverage easy-to-obtain candidate domain annotations given in texts and jointly optimize them during training. The target style is specified by aggregating multi-domain style vectors according to the multi-hot domain assignment. Since the initial candidate domain texts might be inaccurate, we set the candidate domain texts to be learnable and jointly fine-tune them during training. Furthermore, we introduce a slack domain to cover samples that are not covered by the candidate domains. Experiments on several standard benchmarks demonstrate that LANIT achieves comparable or superior performance to existing models.
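The aggregation step described above has a compact form: with a multi-hot assignment vector over candidate domains, the target style is the normalized sum of the selected per-domain style vectors. A sketch under that reading (the normalization choice is an assumption):

```python
# Sketch of multi-hot style aggregation: the target style is the mean of
# the style vectors for every domain assigned to the sample.
import torch

def aggregate_style(assignment: torch.Tensor, styles: torch.Tensor) -> torch.Tensor:
    # assignment: (num_domains,) multi-hot, e.g. tensor([1., 0., 1.])
    # styles:     (num_domains, style_dim) per-domain style vectors
    return (assignment @ styles) / assignment.sum().clamp(min=1)

styles = torch.randn(3, 64)  # three candidate domains
target = aggregate_style(torch.tensor([1., 0., 1.]), styles)  # mean of domains 0 and 2
```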
Automatic product description generation for e-commerce has made significant progress in the past decade. Product copywriting aims to attract users' interest and improve the user experience by highlighting product characteristics with textual descriptions. As the services provided by e-commerce platforms become diverse, it is necessary to adapt the patterns of automatically generated descriptions dynamically. In this paper, we report our experience deploying an E-commerce Prefix-based Controllable Copywriting Generation (EPCCG) system on the JD.com e-commerce product recommendation platform. The development of the system contains four main components: 1) copywriting aspect extraction; 2) weakly supervised aspect labeling; 3) text generation with a prefix-based language model; and 4) copywriting quality control. We conduct experiments to validate the effectiveness of the proposed EPCCG. In addition, we introduce the deployed architecture that cooperates with EPCCG on the real-time JD.com e-commerce recommendation platform, as well as the significant payoff since deployment.
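Prefix-based control in this sense means conditioning generation on a control code prepended to the input. The sketch below only shows the interface; the controllability comes from fine-tuning a model on data carrying such prefixes, which is omitted here, and the aspect token format and base model are illustrative assumptions rather than JD.com's production setup.

```python
# Sketch of prefix-based controllable copywriting: prepend an aspect
# control code to the product input so the LM generates in that style.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def generate_copy(aspect: str, product: str) -> str:
    prompt = f"[{aspect}] {product}:"  # hypothetical control-code format
    out = generator(prompt, max_new_tokens=40, do_sample=True)
    return out[0]["generated_text"]

print(generate_copy("COMFORT", "memory-foam office chair"))
```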