Deep networks should be robust to rare events if they are to be deployed successfully in high-stakes real-world applications such as self-driving cars. Here, we study the capability of deep networks to recognize objects in unusual poses. We create a synthetic dataset of images of objects in unusual orientations and evaluate the robustness of 38 recent and competitive deep networks for image classification. We show that classifying these images is still a challenge for all of the networks tested, with an average accuracy drop of 29.5% compared to when the objects are shown upright. This brittleness is largely unaffected by various network design choices, such as the training loss (e.g., supervised vs. self-supervised), the architecture (e.g., convolutional networks vs. transformers), the dataset modality (e.g., images vs. image-text pairs), and the data-augmentation scheme. However, networks trained on very large datasets substantially outperform the others, with the best network tested, a Noisy Student EfficientNet-L2 trained on JFT-300M, showing a relatively small accuracy drop of only 14.5 percentage points on unusual poses. Nevertheless, a visual inspection of the failures of Noisy Student reveals a remaining gap with the robustness of the human visual system. Furthermore, combining multiple object transformations (3D rotations and scaling) further degrades the performance of all networks. Altogether, our results provide another measure of the robustness of deep networks that is important to consider when using them in the real world. Code and datasets are available at https://github.com/amro-kamal/objectpose.
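As a point of reference, the accuracy drops reported above are simple differences in top-1 accuracy between upright and unusually posed renders of the same objects. Below is a minimal, hedged sketch of that comparison in PyTorch; the model and the two dataloaders are placeholders, not the released ObjectPose evaluation code.

```python
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cpu"):
    """Top-1 accuracy of `model` over a loader yielding (images, labels)."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=-1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / max(total, 1)

def accuracy_drop(model, upright_loader, posed_loader, device="cpu"):
    """Percentage-point drop from upright renders to unusually posed renders."""
    return 100.0 * (top1_accuracy(model, upright_loader, device)
                    - top1_accuracy(model, posed_loader, device))
```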
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
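The pre-training task described above, predicting which caption goes with which image, reduces to a symmetric cross-entropy over an image-text similarity matrix. The following is a minimal PyTorch sketch of that objective; the embedding dimension, batch size, and temperature are illustrative placeholders rather than CLIP's released implementation.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matching (image, text) pairs.

    image_emb, text_emb: [batch, dim] outputs of the two encoders. The i-th
    image and i-th text form the positive pair; all other combinations in the
    batch act as negatives.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    logits = image_emb @ text_emb.t() / temperature      # [batch, batch]
    targets = torch.arange(logits.size(0), device=logits.device)

    loss_i2t = F.cross_entropy(logits, targets)          # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets)      # text -> image
    return 0.5 * (loss_i2t + loss_t2i)

# Quick check with random tensors standing in for encoder outputs.
print(clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512)).item())
```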
Modern deep neural networks tend to be evaluated on static test sets. One shortcoming of this is the fact that these deep neural networks cannot be easily evaluated for robustness issues with respect to specific scene variations. For example, it is hard to study the robustness of these networks to variations of object scale, object pose, scene lighting and 3D occlusions. The main reason is that collecting real datasets with fine-grained naturalistic variations of sufficient scale can be extremely time-consuming and expensive. In this work, we present Counterfactual Simulation Testing, a counterfactual framework that allows us to study the robustness of neural networks with respect to some of these naturalistic variations by building realistic synthetic scenes that allow us to ask counterfactual questions to the models, ultimately providing answers to questions such as "Would your classification still be correct if the object were viewed from the top?" or "Would your classification still be correct if the object were partially occluded by another object?". Our method allows for a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers, with respect to these naturalistic variations. We find evidence that ConvNext is more robust to pose and scale variations than Swin, that ConvNext generalizes better to our simulated domain and that Swin handles partial occlusion better than ConvNext. We also find that robustness for all networks improves with network scale and with data scale and variety. We release the Naturalistic Variation Object Dataset (NVD), a large simulated dataset of 272k images of everyday objects with naturalistic variations such as object pose, scale, viewpoint, lighting and occlusions. Project page: https://counterfactualsimulation.github.io
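Each counterfactual question above boils down to re-rendering the same scene with one factor changed and checking whether the model's top-1 prediction survives. A minimal sketch of that consistency check follows; `render_scene`, `preprocess`, and the variation names are hypothetical stand-ins for the simulator and pipeline described in the paper.

```python
import torch

def prediction_is_consistent(model, scene, variation, render_scene, preprocess):
    """True if the top-1 class is unchanged after applying `variation`.

    `render_scene(scene, variation=None)` is assumed to return an image of the
    scene, optionally with one naturalistic factor changed (pose, scale,
    occlusion, lighting, ...); `preprocess` maps an image to a [C, H, W] tensor.
    """
    model.eval()
    with torch.no_grad():
        base = model(preprocess(render_scene(scene)).unsqueeze(0))
        edit = model(preprocess(render_scene(scene, variation=variation)).unsqueeze(0))
    return base.argmax(dim=-1).item() == edit.argmax(dim=-1).item()

# Averaging this indicator over many scenes gives a robustness score per
# variation, e.g. "viewed from the top" or "partially occluded".
```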
Contrastively trained language-image models such as CLIP, ALIGN, and BASIC have demonstrated unprecedented robustness to multiple challenging natural distribution shifts. Since these language-image models differ from previous training approaches in several ways, an important question is what causes the robustness gains. We answer this question with a systematic experimental investigation. Specifically, we study five different possible causes of the robustness gains: (i) the training set size, (ii) the training distribution, (iii) language supervision at training time, (iv) language supervision at test time, and (v) the contrastive loss function. Our experiments show that the more diverse training distribution is the main cause of the robustness gains, with the other factors contributing little to no robustness. Beyond our experimental results, we also introduce ImageNet-Captions, a version of ImageNet with original text annotations from Flickr, to enable further controlled experiments on language-image training.
Automated visual understanding of our diverse and open world demands computer vision models that generalize well with minimal customization for specific tasks, similar to human vision. Computer vision foundation models, which are trained on diverse, large-scale datasets and can be adapted to a wide range of downstream tasks, are critical for this mission of solving real-world computer vision applications. While existing vision foundation models such as CLIP, ALIGN, and Wu Dao 2.0 focus mainly on mapping images and textual representations to a cross-modal shared representation, we introduce a new computer vision foundation model, Florence, which expands the representations from coarse (scene) to fine (object), from static (images) to dynamic (videos), and from RGB to multiple modalities (caption, depth). By incorporating universal visual-language representations from web-scale image-text data, our Florence model can be easily adapted to various computer vision tasks, such as classification, retrieval, object detection, VQA, image captioning, video retrieval, and action recognition. Moreover, Florence demonstrates outstanding performance in many types of transfer learning: fully sampled fine-tuning, linear probing, few-shot transfer, and zero-shot transfer for novel images and objects. All of these properties are critical for our vision foundation model to serve general-purpose vision tasks. Florence achieves new state-of-the-art results on 44 representative benchmarks, e.g., an ImageNet-1K zero-shot classification top-1 accuracy of 83.74 (and top-5 accuracy of 97.18), 62.4 mAP on COCO fine-tuning, 80.36 on VQA, and 87.8 on Kinetics-600.
This paper presents contrastive-tuning, a simple method that employs contrastive training to align image and text models while still taking advantage of their pre-training. In our empirical study we find that locked pre-trained image models combined with unlocked text models work best. We call this instance of contrastive-tuning "Locked-image Tuning" (LiT), which just teaches a text model to read out good representations from a pre-trained image model for new tasks. A LiT model gains the capability of zero-shot transfer to new vision tasks, such as image classification or retrieval. The proposed LiT is widely applicable; it works reliably with multiple pre-training methods (supervised and unsupervised) and across diverse architectures (ResNet, Vision Transformers, and MLP-Mixer), using three different image-text datasets. With a transformer-based pre-trained ViT-g/14 model, the LiT model achieves 84.5% zero-shot transfer accuracy on the ImageNet test set, and 81.1% on the challenging out-of-distribution ObjectNet test set.
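The "locked" configuration above amounts to freezing the pre-trained image tower and training only the text tower with the usual image-text contrastive objective. A hedged PyTorch sketch of that setup is below; the module names and the `contrastive_loss` helper (for example, the symmetric loss sketched after the CLIP abstract earlier) are illustrative, not the released LiT code.

```python
import torch

def build_lit_optimizer(image_encoder, text_encoder, lr=1e-4):
    """Freeze the pre-trained image tower; optimize only the text tower."""
    for p in image_encoder.parameters():
        p.requires_grad_(False)
    image_encoder.eval()
    return torch.optim.AdamW(text_encoder.parameters(), lr=lr)

def lit_step(image_encoder, text_encoder, images, tokens, contrastive_loss):
    """One contrastive-tuning step: image features computed without gradients."""
    with torch.no_grad():
        img_emb = image_encoder(images)   # locked tower
    txt_emb = text_encoder(tokens)        # trainable tower
    return contrastive_loss(img_emb, txt_emb)
```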
We present a combined scaling method, called BASIC, that achieves 85.7% top-1 zero-shot accuracy on the ImageNet ILSVRC-2012 validation set, surpassing the best published zero-shot models, CLIP and ALIGN, by 9.3%. Our BASIC model also shows significant improvements on robustness benchmarks. For instance, on 5 test sets with natural distribution shifts, such as ImageNet-{A,R,V2,Sketch} and ObjectNet, our model achieves 83.7% top-1 average accuracy, only a small drop from its original ImageNet accuracy. To achieve these results, we scale up the contrastive learning framework of CLIP and ALIGN along three dimensions: data size, model size, and batch size. Our dataset has 6.6B noisy image-text pairs, which is 4x larger than ALIGN's and 16x larger than CLIP's. Our largest model has 3B weights, with 3.75x more parameters and more FLOPs than ALIGN and CLIP. Our batch size is 65536, 2x larger than CLIP's and 4x larger than ALIGN's. The main challenge with scaling is the limited memory of accelerators such as GPUs and TPUs. We therefore propose a simple method of online gradient caching to overcome this limit.
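The gradient caching mentioned above is a two-pass trick for keeping a very large contrastive batch while only ever backpropagating through small chunks. The sketch below is a simplified, single-encoder rendering of that idea, not the actual BASIC implementation (which uses two towers and accelerator-specific details); `loss_fn` is any loss over the full embedding matrix, and in practice random operations such as dropout must be replayed identically in both passes.

```python
import torch

def gradient_cached_step(encoder, batch, loss_fn, optimizer, chunk_size=256):
    """Contrastive step over a batch too large to backprop through at once.

    Pass 1 embeds all chunks without autograd graphs and obtains the loss
    gradient w.r.t. every embedding; pass 2 re-embeds each chunk with a graph
    and pushes the cached gradients through the encoder.
    """
    chunks = batch.split(chunk_size)

    # Pass 1: embeddings without graphs, then gradients w.r.t. embeddings only.
    with torch.no_grad():
        cached = [encoder(c) for c in chunks]
    embeddings = torch.cat(cached).requires_grad_(True)
    loss = loss_fn(embeddings)                 # full-batch contrastive loss
    loss.backward()                            # fills embeddings.grad only
    emb_grads = embeddings.grad.split(chunk_size)

    # Pass 2: recompute each chunk with a graph and inject the cached gradients.
    optimizer.zero_grad()
    for c, g in zip(chunks, emb_grads):
        encoder(c).backward(gradient=g)        # accumulates parameter grads
    optimizer.step()
    return loss.detach()
```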
Recent work has shown that self-supervised pre-training leads to improvements over supervised learning on challenging visual recognition tasks. CLIP, an exciting new approach to learning with language supervision, demonstrates promising performance on a wide variety of benchmarks. In this work, we explore whether self-supervised learning can aid in the use of language supervision for visual representation learning. We introduce SLIP, a multi-task learning framework for combining self-supervised learning and CLIP pre-training. After pre-training with Vision Transformers, we thoroughly evaluate representation quality under three distinct settings and compare performance with self-supervised learning: zero-shot transfer, linear classification, and end-to-end finetuning. Across ImageNet and a battery of additional datasets, we find that SLIP improves accuracy by a large margin. We further validate our results with experiments on different model sizes, training schedules, and pre-training datasets. Our findings show that SLIP enjoys the best of both worlds: better performance than self-supervision (+8.1% linear accuracy) and language supervision (+5.2% zero-shot accuracy).
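The multi-task framework above is, at its core, the sum of a CLIP-style image-text loss and a SimCLR-style loss between two augmented views of each image. A hedged sketch of combining the two terms is below; the loss helpers, embedding shapes, and the weighting factor are illustrative rather than the exact SLIP recipe.

```python
import torch
import torch.nn.functional as F

def simclr_nt_xent(z1, z2, temperature=0.1):
    """NT-Xent loss between the embeddings of two augmented views ([B, dim])."""
    z = F.normalize(torch.cat([z1, z2]), dim=-1)          # [2B, dim]
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))                     # drop self-similarity
    b = z1.size(0)
    # The positive of sample i is its other view: i+B for i < B, i-B otherwise.
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)]).to(sim.device)
    return F.cross_entropy(sim, targets)

def slip_loss(clip_loss, img_emb, txt_emb, view1_emb, view2_emb, ssl_scale=1.0):
    """Multi-task objective: language supervision plus image self-supervision."""
    return clip_loss(img_emb, txt_emb) + ssl_scale * simclr_nt_xent(view1_emb, view2_emb)
```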
Recent large-scale image generation models such as Stable Diffusion have exhibited an impressive ability to generate fairly realistic images starting from a very simple text prompt. Could such models render real images obsolete for training image prediction models? In this paper, we answer part of this provocative question by questioning the need for real images when training models for ImageNet classification. More precisely, provided only with the class names that have been used to build the dataset, we explore the ability of Stable Diffusion to generate synthetic clones of ImageNet and measure how useful they are for training classification models from scratch. We show that with minimal and class-agnostic prompt engineering those ImageNet clones we denote as ImageNet-SD are able to close a large part of the gap between models produced by synthetic images and models trained with real images for the several standard classification benchmarks that we consider in this study. More importantly, we show that models trained on synthetic images exhibit strong generalization properties and perform on par with models trained on real data.
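Because the only inputs in this setting are the class names, building such a synthetic clone is roughly "sample N images per class from a minimal prompt". The sketch below uses the Hugging Face diffusers pipeline as one way to do that; the checkpoint name, prompt template, and sampling settings are assumptions, not the paper's released recipe.

```python
# pip install torch diffusers transformers
import os
import torch
from diffusers import StableDiffusionPipeline

def generate_class_images(class_name, n_images=4, out_dir="imagenet_sd"):
    """Generate a few synthetic images for one class from its name alone."""
    os.makedirs(out_dir, exist_ok=True)
    # Any available Stable Diffusion checkpoint works; this id is an assumption.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")
    prompt = class_name                       # minimal, class-agnostic prompting
    for i in range(n_images):
        image = pipe(prompt).images[0]        # PIL image
        image.save(os.path.join(out_dir, f"{class_name.replace(' ', '_')}_{i}.png"))

# generate_class_images("tabby cat")
```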
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
Biological vision systems are unparalleled in their ability to learn visual representations without supervision. In machine learning, contrastive learning (CL) has led to the formation of object representations in an unsupervised manner. These systems learn representations that are invariant to augmentation operations applied to images, such as cropping or flipping. In contrast, biological vision systems exploit the temporal structure of visual experience. This gives access to augmentations not commonly used in CL, such as observing the same object from multiple viewpoints or against different backgrounds. Here, we systematically investigate and compare the potential benefits of such time-based augmentations for learning object categories. Our results show that time-based augmentations outperform state-of-the-art image augmentations. Specifically, our analyses reveal that: 1) 3-D object rotations greatly improve the learning of object categories; 2) viewing objects against changing backgrounds is essential for learning to discard background-related information. Overall, we conclude that time-based augmentations can greatly improve contrastive learning, narrowing the gap between artificial and biological vision systems.
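The time-based augmentations above amount to treating two temporally neighboring observations of the same object, for example before and after a small 3-D rotation or a background change, as the positive pair, instead of two synthetic crops of one image. A minimal sketch, assuming a dataset of short object-centric frame sequences:

```python
import random
import torch

def temporal_positive_pair(sequence, max_gap=3):
    """Sample two frames of one object sequence as a contrastive positive pair.

    `sequence`: tensor [T, C, H, W] of frames showing the same object while it
    rotates in 3-D and/or the background changes. Frames at most `max_gap`
    steps apart are treated as two views of the same content.
    """
    t = random.randrange(sequence.size(0))
    t2 = min(t + random.randint(1, max_gap), sequence.size(0) - 1)
    return sequence[t], sequence[t2]

def embed_temporal_pair(encoder, sequence):
    """Encode a temporal positive pair for use in any standard contrastive loss."""
    v1, v2 = temporal_positive_pair(sequence)
    return encoder(v1.unsqueeze(0)), encoder(v2.unsqueeze(0))
```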
This paper investigates two techniques for developing efficient self-supervised vision transformers (EsViT) for visual representation learning. First, we show through a comprehensive empirical study that multi-stage architectures with sparse self-attention can significantly reduce modeling complexity, but at the cost of losing the ability to capture fine-grained correspondences between image regions. Second, we propose a new region-matching pre-training task, which allows the model to capture fine-grained region dependencies and thereby significantly improves the quality of the learned visual representations. Our results show that EsViT, combining the two techniques, achieves 81.3% top-1 accuracy on the ImageNet linear probe evaluation, outperforming prior arts with around an order of magnitude higher throughput. When transferred to downstream linear classification tasks, EsViT outperforms its supervised counterpart on 17 out of 18 datasets. The code and models are publicly available at https://github.com/microsoft/esvit
State-of-the-art image classifiers trained on massive datasets such as ImageNet have been shown to be vulnerable to a range of both intentional and incidental distribution shifts. On the other hand, several recent classifiers with favorable out-of-distribution (OOD) robustness properties have emerged, attaining high accuracy on their target tasks while maintaining their accuracy on challenging out-of-distribution benchmarks. We present a meta-analysis of a broad range of publicly released models, most of which have been published over the last twelve months. Through this meta-analysis, we empirically identify four main commonalities shared by all of the best-performing OOD models, all of which illuminate the substantial promise of vision-language pre-training.
Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yields consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.
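What makes the recipe above open-vocabulary is that each predicted box is scored against embeddings of free-form category texts rather than a fixed classifier head. The snippet below is a generic, hedged sketch of that scoring step, not the released OWL-ViT code.

```python
import torch
import torch.nn.functional as F

def text_conditioned_box_logits(box_emb, text_emb, temperature=0.07):
    """Score every predicted box against every query text.

    box_emb:  [num_boxes, dim] per-box image embeddings from the detector.
    text_emb: [num_texts, dim] embeddings of free-form category descriptions.
    Returns [num_boxes, num_texts] logits; the argmax over the last axis gives
    each box's open-vocabulary class, and a threshold filters background boxes.
    """
    box_emb = F.normalize(box_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    return box_emb @ text_emb.t() / temperature
```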
Finetuning image-text models such as CLIP achieves state-of-the-art accuracies on a variety of benchmarks. However, recent works like WiseFT (Wortsman et al., 2021) and LP-FT (Kumar et al., 2022) have shown that even subtle differences in the finetuning process can lead to surprisingly large differences in the final performance, both for in-distribution (ID) and out-of-distribution (OOD) data. In this work, we show that a natural and simple approach of mimicking contrastive pretraining consistently outperforms alternative finetuning approaches. Specifically, we cast downstream class labels as text prompts and continue optimizing the contrastive loss between image embeddings and class-descriptive prompt embeddings (contrastive finetuning). Our method consistently outperforms baselines across 7 distribution shifts, 6 transfer learning, and 3 few-shot learning benchmarks. On WILDS-iWILDCam, our proposed approach FLYP outperforms the top of the leaderboard by $2.3\%$ ID and $2.7\%$ OOD, giving the highest reported accuracy. Averaged across 7 OOD datasets (2 WILDS and 5 ImageNet associated shifts), FLYP gives gains of $4.2\%$ OOD over standard finetuning and outperforms the current state of the art (LP-FT) by more than $1\%$ both ID and OOD. Similarly, on 3 few-shot learning benchmarks, our approach gives gains up to $4.6\%$ over standard finetuning and $4.4\%$ over the state of the art. In total, these benchmarks establish contrastive finetuning as a simple, intuitive, and state-of-the-art approach for supervised finetuning of image-text models like CLIP. Code is available at https://github.com/locuslab/FLYP.
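Concretely, the contrastive finetuning described above keeps the pretraining loss and only swaps in prompts built from the downstream label set. A minimal sketch, assuming a CLIP-like model with separate image and text encoders and a generic symmetric contrastive loss helper (the prompt template and tokenizer interface are illustrative):

```python
def flyp_style_step(image_encoder, text_encoder, tokenizer, images, labels,
                    class_names, contrastive_loss, template="a photo of a {}"):
    """One contrastive-finetuning step: labels become prompts, loss stays contrastive.

    Instead of attaching a cross-entropy head, each image is paired with the
    prompt of its ground-truth class and the image-text contrastive loss used
    during pretraining is minimized over the batch.
    """
    prompts = [template.format(class_names[y]) for y in labels.tolist()]
    tokens = tokenizer(prompts)            # assumed to return a model-ready tensor
    img_emb = image_encoder(images)
    txt_emb = text_encoder(tokens)
    return contrastive_loss(img_emb, txt_emb)
```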
Deep neural networks have become prevalent in human analysis, boosting the performance of applications such as biometric recognition, action recognition, and person re-identification. However, the performance of such networks scales with the available training data. In human analysis, the demand for large-scale datasets poses a severe challenge, as data collection is tedious, time-consuming, and expensive, and must comply with data protection laws. Current research investigates the generation of synthetic data as an efficient and privacy-preserving alternative to collecting real data in the field. This survey introduces the basic definitions and methodologies that are essential when generating and employing synthetic data for human analysis. We summarize current state-of-the-art methods and the main benefits of using synthetic data. We also provide an overview of publicly available synthetic datasets and generation models. Finally, we discuss limitations as well as open research questions in this field. This survey is intended for researchers and practitioners in the field of human analysis.
Vision language (VL) models like CLIP are robust to natural distribution shifts, in part because CLIP learns on unstructured data using a technique called caption supervision; the model interprets image-linked texts as ground-truth labels. In a carefully controlled comparison study, we show that caption-supervised CNNs trained on a standard cross-entropy loss (with image labels assigned by scanning captions for class names) can exhibit greater distributional robustness than VL models trained on the same data. To facilitate future experiments with high-accuracy caption-supervised models, we introduce CaptionNet (https://github.com/penfever/CaptionNet/), a class-balanced, fully supervised dataset with over 50,000 new human-labeled ImageNet-compliant samples, together with web-scraped captions. In a series of experiments on CaptionNet, we show how the choice of loss function, data filtration and supervision strategy enable robust computer vision. We also provide the codebase necessary to reproduce our experiments at VL Hub (https://github.com/penfever/vlhub/).
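The caption supervision used for the CNNs above assigns labels by string-matching class names (or synonyms) inside each web caption and then trains with ordinary cross-entropy. A hedged sketch of that labeling step, with a made-up synonym table:

```python
def label_from_caption(caption, class_synonyms):
    """Assign an image label by scanning its caption for class names.

    class_synonyms: dict mapping class index -> list of name strings, e.g.
        {0: ["tabby", "tabby cat"], 1: ["beagle"]}.
    Returns the first matching class index, or None if no class name appears
    (such samples are typically dropped or handled by a filtering step).
    """
    text = caption.lower()
    for class_idx, names in class_synonyms.items():
        if any(name.lower() in text for name in names):
            return class_idx
    return None

# Example: the resulting integer labels feed a standard cross-entropy loss.
print(label_from_caption("my tabby cat asleep in the sun",
                         {0: ["tabby cat"], 1: ["beagle"]}))
```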
Vision transformers (ViT) have demonstrated impressive performance across various machine vision problems. These models are based on multi-head self-attention mechanisms that can flexibly attend to a sequence of image patches to encode contextual cues. An important question is how such flexibility in attending to image-wide context conditioned on a given patch helps handle nuisances in natural images, e.g., severe occlusions, domain shifts, spatial permutations, and adversarial and natural perturbations. We systematically study this question via an extensive set of experiments encompassing three ViT families and comparisons with a high-performing convolutional neural network (CNN). We show and analyze the following intriguing properties of ViT: (a) Transformers are highly robust to severe occlusions, perturbations and domain shifts, e.g., they retain as high as 60% top-1 accuracy on ImageNet even after randomly occluding 80% of the image content. (b) The robust performance to occlusions is not due to a bias towards local textures; compared to CNNs, ViTs are significantly less biased towards textures. When properly trained to encode shape-based features, ViTs demonstrate shape recognition capability comparable to that of the human visual system, previously unmatched in the literature. (c) Using ViTs to encode shape representations leads to the interesting consequence of accurate semantic segmentation without pixel-level supervision. (d) Off-the-shelf features from a single ViT model can be combined to create a feature ensemble, leading to high accuracy rates on a range of classification datasets in both traditional and few-shot learning paradigms. We show that the effective features of ViTs are due to the flexible and dynamic receptive fields made possible by the self-attention mechanism.
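The occlusion result in (a) can be reproduced in spirit by zeroing out a random subset of non-overlapping patches before classification. Below is a minimal sketch of that patch-drop protocol; the patch size and drop ratio are the obvious knobs.

```python
import torch

def random_patch_drop(images, drop_ratio=0.8, patch=16):
    """Zero out a random `drop_ratio` fraction of non-overlapping patches.

    images: [B, C, H, W] with H and W divisible by `patch`. Returns a copy in
    which the selected patches are blacked out, the occlusion protocol used to
    probe the robustness of ViTs versus CNNs.
    """
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, gh * gw) >= drop_ratio).float()   # 1 = keep patch
    mask = keep.view(b, 1, gh, gw)
    mask = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * mask.to(images.device)

# occluded = random_patch_drop(batch)   # then measure top-1 accuracy as usual
```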
We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3%-15% on CIFAR-10 and 11%-14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly "harder" images than those found in the original test sets.
Despite recent progress made by self-supervised methods in representation learning with residual networks, they still underperform supervised learning on the ImageNet classification benchmark, limiting their applicability in performance-critical settings. Building on prior theoretical insights from Mitrovic et al. (2021), we propose ReLICv2, which combines an explicit invariance loss with a contrastive objective over a varied set of appropriately constructed data views. ReLICv2 achieves 77.1% top-1 classification accuracy on ImageNet using linear evaluation with a ResNet50 architecture and 80.6% with larger ResNet models, outperforming previous state-of-the-art self-supervised approaches by a wide margin. Most notably, ReLICv2 is the first representation learning method to consistently outperform the supervised baseline in a like-for-like comparison across a range of standard ResNet architectures. Finally, we show that despite using ResNet encoders, ReLICv2 is comparable to state-of-the-art self-supervised vision transformers.
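The objective sketched below is a simplified rendering of the idea above: a standard contrastive term over two views plus an explicit invariance penalty that makes the two views' similarity distributions agree. It is a hedged sketch, not the exact ReLICv2 loss (which also uses multiple crops and saliency-based views).

```python
import torch
import torch.nn.functional as F

def relic_style_loss(z1, z2, temperature=0.1, alpha=1.0):
    """Contrastive objective plus an explicit invariance penalty.

    z1, z2: [batch, dim] embeddings of two views of the same images. Matching
    indices are positives for the contrastive part; the invariance part is a KL
    term between the two views' softmax similarity distributions.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits12 = z1 @ z2.t() / temperature
    logits21 = z2 @ z1.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)

    contrastive = 0.5 * (F.cross_entropy(logits12, targets) +
                         F.cross_entropy(logits21, targets))

    log_p12 = F.log_softmax(logits12, dim=-1)
    log_p21 = F.log_softmax(logits21, dim=-1)
    invariance = F.kl_div(log_p12, log_p21, log_target=True, reduction="batchmean")

    return contrastive + alpha * invariance
```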