Modern deep neural networks are typically evaluated on static test sets. One shortcoming of this practice is that such networks cannot easily be probed for robustness to specific scene variations: for example, it is hard to study their robustness to variations in object scale, object pose, scene lighting, and 3D occlusions. The main reason is that collecting real datasets with fine-grained naturalistic variations at sufficient scale can be extremely time-consuming and expensive. In this work, we present Counterfactual Simulation Testing, a counterfactual framework for studying the robustness of neural networks with respect to some of these naturalistic variations by building realistic synthetic scenes that let us pose counterfactual questions to the models, such as "Would your classification still be correct if the object were viewed from the top?" or "Would your classification still be correct if the object were partially occluded by another object?". Our method allows for a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers with respect to these naturalistic variations. We find evidence that ConvNeXt is more robust to pose and scale variations than Swin, that ConvNeXt generalizes better to our simulated domain, and that Swin handles partial occlusion better than ConvNeXt. We also find that robustness improves for all networks with network scale and with data scale and variety. We release the Naturalistic Variation Object Dataset (NVD), a large simulated dataset of 272k images of everyday objects with naturalistic variations such as object pose, scale, viewpoint, lighting, and occlusions. Project page: https://counterfactualsimulation.github.io
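One such counterfactual check is straightforward to express in code: render a base scene, re-render it under a single controlled variation, and ask whether the prediction survives. A minimal sketch, where `render_scene` and its keyword arguments are hypothetical stand-ins for a simulator call:

```python
import torch

@torch.no_grad()
def counterfactual_test(model, render_scene, scene, variation):
    """Does the prediction survive one controlled scene variation?

    render_scene: hypothetical simulator hook returning a (1, C, H, W) tensor.
    variation: e.g. {"viewpoint": "top"} or {"occluder": "mug"}.
    """
    base_pred = model(render_scene(scene)).argmax(dim=1)
    cf_pred = model(render_scene(scene, **variation)).argmax(dim=1)
    return bool((base_pred == cf_pred).item())

# Toy usage with stand-ins for the simulator and the classifier:
render = lambda scene, **v: torch.randn(1, 3, 224, 224)
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 10))
print(counterfactual_test(net, render, scene={"object": "chair"},
                          variation={"viewpoint": "top"}))
```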
Deep networks should be robust to rare events if they are to be deployed successfully in high-stakes real-world applications such as self-driving cars. Here we study the ability of deep networks to recognize objects in unusual poses. We create a synthetic dataset of images of objects in unusual orientations and evaluate the robustness of 38 recent and competitive deep networks for image classification. We show that classifying these images remains a challenge for all of the networks tested, with an average accuracy drop of 29.5% relative to when the objects are shown upright. This brittleness is largely unaffected by various design choices, such as the training loss (e.g., supervised vs. self-supervised), the architecture (e.g., convolutional networks vs. transformers), the dataset modality (e.g., images vs. image-text pairs), and the data-augmentation scheme. However, networks trained on very large datasets substantially outperform the others, and the best network tested, a Noisy Student EfficientNet-L2 trained on JFT-300M, shows a relatively small accuracy drop of only 14.5 percentage points on unusual poses. Nevertheless, visual inspection of Noisy Student's failures reveals a remaining gap in robustness relative to the human visual system. Furthermore, combining multiple object transformations, 3D rotations together with scaling, degrades the performance of all networks even further. Overall, our results provide another measure of the robustness of deep networks, one that is important to consider when using them in the real world. Code and datasets are available at https://github.com/amro-kamal/objectpose.
Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) have been studied across different research programs, resulting in different recommendations. While sharing the same aspirational goal, these approaches have never been tested under the same experimental conditions on real data. In this paper, we take a unified view of previous work, highlighting message discrepancies that we address empirically, and providing recommendations on how to measure the robustness of a model and how to improve it. To this end, we collect 172 publicly available dataset pairs for training and out-of-distribution evaluation of accuracy, calibration error, adversarial attacks, environment invariance, and synthetic corruptions. We fine-tune over 31k networks from nine different architectures. Our findings confirm that in-distribution and out-of-distribution accuracies tend to increase jointly, but show that their relation is largely dataset-dependent, and in general more nuanced and more complex than posited by previous, smaller-scale studies.
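Several of the proxy targets above have simple concrete estimators. For instance, a minimal sketch of a binned expected calibration error (one standard formulation; the paper's exact estimator may differ):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Binned ECE: mean |accuracy - confidence| per bin, weighted by bin mass."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy usage: three predictions with confidences and correctness flags.
print(expected_calibration_error([0.9, 0.8, 0.6], [1, 0, 1]))
```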
In spite of the recent successes of computer vision, there remain new avenues to explore. In this work, we present a new dataset to investigate the effect of self-occlusion on deep neural networks. With TEOS (The Effect Of Self-Occlusion), we propose a 3D blocks-world dataset that focuses on the geometry of 3D objects and the omnipresent challenge posed by their self-occlusion. We designed TEOS to investigate the role of self-occlusion in the context of object classification. Even though remarkable progress has been made in object classification, self-occlusion remains a challenge: in the real world, the self-occlusion of 3D objects still presents significant difficulties for deep learning approaches. Humans, however, deal with it by deploying complex strategies, for instance by changing the viewpoint or manipulating the scene to gather the necessary information. With TEOS, we present a dataset with two difficulty levels (L1 and L2), containing 36 and 12 objects, respectively. We provide 738 uniformly sampled views of each object, their masks, object and camera positions and orientations, the amount of self-occlusion, and the CAD model of each object. We present baseline evaluations with five well-known classification deep neural networks and show that TEOS poses a significant challenge for all of them. The dataset, as well as pre-trained models, are publicly available for the scientific community at https://nvision2.data.eecs.yorku.ca/eyos.
Vision transformers (ViTs) have demonstrated impressive performance across a variety of machine vision problems. These models are based on a multi-head self-attention mechanism that can flexibly attend to a sequence of image patches to encode contextual cues. An important question is how such flexibility, attending to image-wide context conditioned on a given patch, facilitates handling nuisances in natural images, e.g., severe occlusions, domain shifts, spatial permutations, and adversarial and natural perturbations. We systematically study this question via an extensive set of experiments covering three ViT families and comparisons with a high-performing convolutional neural network (CNN). We show and analyze the following intriguing properties of ViTs: (a) Transformers are highly robust to severe occlusions, perturbations, and domain shifts, e.g., they retain as much as 60% top-1 accuracy on ImageNet even after randomly occluding 80% of the image content. (b) This robust performance under occlusion is not due to a bias towards local textures; in fact, ViTs are significantly less biased towards textures than CNNs. When properly trained to encode shape-based features, ViTs demonstrate a shape-recognition capability comparable to that of the human visual system, previously unmatched in the literature. (c) Using ViTs to encode shape representations leads to the interesting consequence of accurate semantic segmentation without pixel-level supervision. (d) Off-the-shelf features from a single ViT model can be combined into a feature ensemble, leading to high accuracy across a range of classification datasets in both traditional and few-shot learning paradigms. We show that these effective features of ViTs are due to the flexible and dynamic receptive fields made possible by the self-attention mechanism.
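The occlusion protocol in (a) can be emulated by zeroing a random subset of non-overlapping patches before evaluation. A minimal PyTorch sketch follows, with the patch size and drop ratio as illustrative assumptions rather than the paper's exact settings:

```python
import torch

def randomly_occlude_patches(images, patch=16, drop_ratio=0.8):
    """Zero out a random `drop_ratio` fraction of non-overlapping patches.

    images: (B, C, H, W) tensor with H and W divisible by `patch`.
    """
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    n = gh * gw
    out = images.clone()
    for i in range(b):
        drop = torch.randperm(n)[: int(drop_ratio * n)]
        for idx in drop:
            r, col = divmod(idx.item(), gw)
            out[i, :, r*patch:(r+1)*patch, col*patch:(col+1)*patch] = 0
    return out

# Usage: occlude a batch of 224x224 images, then feed them to any classifier.
x = torch.randn(2, 3, 224, 224)
x_occ = randomly_occlude_patches(x)
```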
There has recently been a surge of methods that aim to decompose and segment scenes in an unsupervised manner, i.e., unsupervised multi-object segmentation. Performing such a task is a long-standing goal of computer vision, offering to unlock object-level reasoning without requiring dense annotations to train segmentation models. Despite significant progress, current models are developed and trained on visually simple scenes depicting mono-colored objects on plain backgrounds. The natural world, however, is visually complex, with confounding aspects such as diverse textures and complicated lighting effects. In this study, we present a new benchmark called ClevrTex, designed as the next challenge on which to compare, evaluate, and analyze algorithms. ClevrTex features synthetic scenes with diverse shapes, textures, and photo-mapped materials, created using physically based rendering techniques. It includes 50k examples depicting 3-10 objects arranged on a background, created using a catalog of 60 materials, and a further test set of 10k images created using 25 different materials. We benchmark a large set of recent unsupervised multi-object segmentation models on ClevrTex and find that all state-of-the-art approaches fail to learn good representations in the textured setting, despite impressive performance on simpler data. We also create variants of the ClevrTex dataset, controlling for different aspects of scene complexity, and probe current methods for individual failure modes. The dataset and code are available at https://www.robots.ox.ac.uk/~vgg/research/clevrtex.
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Enhancing robustness in real-world scenarios has proven very challenging. One reason is that existing robustness benchmarks are limited: they either rely on synthetic data or simply reduce robustness to generalization between datasets, thereby ignoring the effects of individual nuisance factors. In this work, we introduce ROBIN, a benchmark dataset for diagnosing the robustness of vision algorithms to individual nuisances in the real world. ROBIN builds on 10 rigid categories from the PASCAL VOC 2012 and ImageNet datasets and includes out-of-distribution examples of the objects' 3D pose, shape, texture, background, and weather conditions. ROBIN is richly annotated to enable benchmarking of models for image classification, object detection, and 3D pose estimation. We provide results for a number of popular baselines and make several interesting observations: 1. Some nuisance factors have a much stronger negative effect on performance than others; moreover, the negative effect of an OOD nuisance depends on the downstream vision task. 2. Current approaches that enhance robustness through strong data augmentation have only marginal effects in real-world OOD scenarios, and sometimes even reduce performance. 3. We do not observe any significant difference between convolutional and transformer architectures in terms of robustness. We believe our dataset provides a rich testbed for studying the robustness of vision algorithms and can help significantly advance prospective research in this area.
Intrinsic imaging, or intrinsic image decomposition, has traditionally been described as the problem of decomposing an image into two layers: a reflectance layer, capturing the albedo of the material; and a shading layer, produced by the interaction between light and geometry. In recent years, deep learning techniques have been widely applied to increase the accuracy of those separations. In this survey, we overview those results in the context of well-known intrinsic image datasets and the relevant metrics used in the literature, discussing their suitability for predicting a desirable intrinsic image decomposition. Although the Lambertian assumption is still the foundation of many methods, we show that there is increasing awareness of the potential of more sophisticated, physically principled components of the image-formation process, that is, optically accurate material models and geometry, and more complete inverse light-transport estimation. We classify these methods by the type of decomposition, considering the priors and models used as well as the learning architecture and methodology driving the decomposition process. We also provide insights on future research directions, given the recent advances in neural, inverse, and differentiable rendering techniques.
This survey reviews explainability methods for vision-based self-driving systems trained with behavior cloning. The concept of explainability has several facets, and the need for explainability is strong in driving, a safety-critical application. Gathering contributions from several research fields, namely computer vision, deep learning, autonomous driving, and explainable AI (X-AI), this survey addresses several points. First, it discusses definitions, context, and the motivation for obtaining more interpretability and explainability from self-driving systems, as well as the challenges specific to this application. Second, methods that provide explanations for a black-box self-driving system in a post-hoc fashion are comprehensively organized and detailed. Third, approaches that aim to build self-driving systems that are more interpretable by design are presented and discussed. Finally, remaining open challenges and potential future research directions are identified and examined.
To perform well on unseen and potentially out-of-distribution samples, it is desirable for machine learning models to have a predictable response with respect to transformations affecting the factors of variation of the input. Here, we study the relative importance of several types of inductive biases towards such predictable behavior: the choice of data, their augmentations, and the model architecture. Invariance is commonly achieved through hand-engineered data augmentation, but do standard data augmentations address transformations that explain variations in real data? While prior work has focused on synthetic data, we attempt here to characterize the factors of variation in a real dataset, ImageNet, and study the invariance of standard residual networks and the recently proposed vision transformer with respect to changes in these factors. We show that standard augmentation relies on a precise combination of translation and scale, with translation recapturing most of the performance improvement, despite the (approximate) translation invariance built into convolutional architectures such as residual networks. In fact, we find that scale and translation invariance are similar in residual networks and vision transformer models, despite their markedly different architectural inductive biases. We show that the training data itself is the main source of invariance, and that data augmentation only further increases the learned invariances. Notably, the invariances learned during training align with the factors of variation we find in ImageNet. Finally, we find that the main factors of variation in ImageNet mostly relate to appearance and are specific to each class.
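Invariance of this kind is commonly quantified as the fraction of predictions that survive a given transformation. A minimal sketch, with the consistency metric chosen for illustration rather than taken from the paper:

```python
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def prediction_consistency(model, images, shift=16):
    """Fraction of predictions unchanged under a horizontal translation."""
    model.eval()
    base = model(images).argmax(dim=1)
    shifted = TF.affine(images, angle=0.0, translate=[shift, 0],
                        scale=1.0, shear=[0.0])
    moved = model(shifted).argmax(dim=1)
    return (base == moved).float().mean().item()

# Toy usage; in practice pass an ImageNet classifier and real images.
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 10))
print(prediction_consistency(net, torch.randn(8, 3, 64, 64), shift=4))
```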
Although several methods have been proposed to tackle the difficult task of domain generalization, understanding what makes this task challenging has received little attention. Here, we present SemanticDG (Semantic Domain Generalization): a benchmark of 15 domains sharing the same geometry, scene layout, and camera parameters as the popular 3D ScanNet dataset, but with controlled domain shifts in lighting, materials, and viewpoint. Using this benchmark, we study the impact of each of these semantic shifts on generalization independently. Visual recognition models easily generalize to novel lighting, but struggle with distribution shifts in materials and viewpoints. Inspired by human vision, we hypothesize that scene context can serve as a bridge to help models generalize across shifts in the material and viewpoint domains, and propose a context-aware vision transformer, together with a contrastive loss over material and viewpoint changes, to address these domain shifts. Our method (dubbed CDCNet) outperforms existing domain generalization methods by more than 18%. As a critical reference point, we also conduct psychophysics experiments and find that humans generalize equally well across lighting, material, and viewpoint shifts. The benchmark and computational model introduced here help in understanding the challenges associated with generalization across domains, and provide an initial step towards extrapolation to semantic distribution shifts. We include all data and source code in the supplementary material.
We collect a large real-world test set, ObjectNet, for object recognition with controls where object backgrounds, rotations, and imaging viewpoints are random. Most scientific experiments have controls, confounds which are removed from the data, to ensure that subjects cannot perform a task by exploiting trivial correlations in the data. Historically, large machine learning and computer vision datasets have lacked such controls. This has resulted in models that must be fine-tuned for new datasets and perform better on datasets than in real-world applications. When tested on ObjectNet, object detectors show a 40-45% drop in performance relative to their performance on other benchmarks, due to the controls for biases. The controls make ObjectNet robust to fine-tuning, showing only small performance increases. We develop a highly automated platform that enables gathering datasets with controls by crowdsourcing image capturing and annotation. ObjectNet is the same size as the ImageNet test set (50,000 images) and, by design, does not come paired with a training set, in order to encourage generalization. The dataset is both easier than ImageNet (objects are largely centered and unoccluded) and harder, due to the controls. Although we focus on object recognition here, data with controls can be gathered at scale using automated tools throughout machine learning to generate datasets that exercise models in new ways, thus providing valuable feedback to researchers. This work opens up new avenues for research in generalizable, robust, and more human-like computer vision and in creating datasets where results are predictive of real-world performance.
Multi-view projection techniques have shown themselves to be highly effective in achieving top-performing results in the recognition of 3D shapes. These methods involve learning how to combine information from multiple view-points. However, the camera view-points from which these views are obtained are often fixed for all shapes. To overcome the static nature of current multi-view techniques, we propose learning these view-points. Specifically, we introduce the Multi-View Transformation Network (MVTN), which uses differentiable rendering to determine optimal view-points for 3D shape recognition. As a result, MVTN can be trained end-to-end with any multi-view network for 3D shape classification. We integrate MVTN into a novel adaptive multi-view pipeline that is capable of rendering both 3D meshes and point clouds. Our approach demonstrates state-of-the-art performance in 3D classification and shape retrieval on several benchmarks (ModelNet40, ScanObjectNN, ShapeNet Core55). Further analysis indicates that our approach exhibits improved robustness to occlusion compared to other methods. We also investigate additional aspects of MVTN, such as 2D pretraining and its use for segmentation. To support further research in this area, we have released MVTorch, a PyTorch library for 3D understanding and generation using multi-view projections.
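MVTN's central idea, treating camera viewpoints as quantities that receive gradients through a differentiable renderer, can be sketched compactly. The sketch below simplifies MVTN (which regresses viewpoints from shape features with a small network) to globally learned angles, and uses a toy stand-in for the renderer; in a real pipeline the renderer would come from a library such as PyTorch3D:

```python
import torch
import torch.nn as nn

class LearnedViewpoints(nn.Module):
    """Trainable azimuth/elevation angles for M views (simplified MVTN idea)."""
    def __init__(self, n_views=8):
        super().__init__()
        # Initialize views evenly around the object; angles in degrees.
        azim = torch.linspace(0, 360, n_views + 1)[:-1]
        elev = torch.zeros(n_views)
        self.angles = nn.Parameter(torch.stack([azim, elev], dim=1))

    def forward(self, shape_features, render_fn):
        # render_fn must be differentiable for gradients to reach the angles.
        return render_fn(shape_features, self.angles)

# Toy stand-in for a differentiable renderer: any function of the angles.
def toy_render(feats, angles):
    return feats.sum() + angles.sin().sum()

vp = LearnedViewpoints()
loss = vp(torch.randn(4, 16), toy_render)
loss.backward()  # gradients flow into the viewpoint angles
print(vp.angles.grad is not None)  # True
```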
Different types of mental rotation tests have been used extensively in psychology to understand human visual reasoning and perception. Understanding what an object or visual scene would look like from another viewpoint is a challenging problem that is made even harder if it must be performed from a single image. We explore a controlled setting whereby questions are posed about the properties of a scene as if that scene were observed from another viewpoint. To do this we have created a new version of the CLEVR dataset that we call CLEVR Mental Rotation Tests (CLEVR-MRT). Using CLEVR-MRT we examine standard methods, show how they fall short, then explore novel neural architectures that involve inferring volumetric representations of a scene. These volumes can be manipulated via camera-conditioned transformations to answer the question. We examine the performance of different model variants through rigorous ablations and demonstrate the efficacy of volumetric representations.
Machine learning models are known to be susceptible to adversarial perturbation. One famous attack is the adversarial patch, a sticker with a particularly crafted pattern that makes the model incorrectly predict the object it is placed on. This attack presents a critical threat to cyber-physical systems that rely on cameras such as autonomous cars. Despite the significance of the problem, conducting research in this setting has been difficult; evaluating attacks and defenses in the real world is exceptionally costly while synthetic data are unrealistic. In this work, we propose the REAP (REalistic Adversarial Patch) benchmark, a digital benchmark that allows the user to evaluate patch attacks on real images, and under real-world conditions. Built on top of the Mapillary Vistas dataset, our benchmark contains over 14,000 traffic signs. Each sign is augmented with a pair of geometric and lighting transformations, which can be used to apply a digitally generated patch realistically onto the sign. Using our benchmark, we perform the first large-scale assessments of adversarial patch attacks under realistic conditions. Our experiments suggest that adversarial patch attacks may present a smaller threat than previously believed and that the success rate of an attack on simpler digital simulations is not predictive of its actual effectiveness in practice. We release our benchmark publicly at https://github.com/wagner-group/reap-benchmark.
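Applying a patch "realistically" amounts to composing a geometric warp with a lighting adjustment. A rough sketch with OpenCV, where the destination corners and the affine relighting parameters are illustrative assumptions (REAP estimates its transform pairs per sign):

```python
import cv2
import numpy as np

def apply_patch(image, patch, corners, gain=0.8, bias=10.0):
    """Warp `patch` onto `image` at `corners` with a simple relighting.

    corners: 4x2 destination quad (e.g., a sign's bounding quadrilateral).
    gain/bias: affine lighting model `gain * patch + bias`, an assumption here.
    """
    h, w = patch.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src, np.float32(corners))
    relit = np.clip(gain * patch.astype(np.float32) + bias, 0, 255)
    warped = cv2.warpPerspective(relit, M, (image.shape[1], image.shape[0]))
    mask = cv2.warpPerspective(np.ones((h, w), np.float32), M,
                               (image.shape[1], image.shape[0]))[..., None]
    return (image * (1 - mask) + warped * mask).astype(np.uint8)

# Toy usage: paste a 32x32 patch onto a 256x256 image.
img = np.zeros((256, 256, 3), np.uint8)
pch = np.full((32, 32, 3), 255, np.uint8)
out = apply_patch(img, pch, corners=[[60, 60], [140, 70], [150, 150], [50, 140]])
```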
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the disruption of previously acquired knowledge, a drawback affecting deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the modeled data do not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental, as it would allow us to build truly intelligent systems showing stability and plasticity. Secondly, it would allow us to overcome the onerous limitation of retraining these architectures from scratch with newly updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor than the quality of the data. Secondly, we propose one of the early works on incremental learning for ViT architectures, comparing functional, weight, and attention regularization approaches, and propose an effective novel asymmetric loss. Finally, we present a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field, and we close with some future directions and final remarks.
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
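In practice, zero-shot transfer means embedding candidate captions and picking the one nearest the image. A short sketch against the released openai/CLIP package, with placeholder prompts and image path:

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Build a zero-shot "classifier" purely from text prompts.
labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
text = clip.tokenize(labels).to(device)
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(labels[probs.argmax().item()])
```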
We study how robust current ImageNet models are to distribution shifts arising from natural variations in datasets. Most research on robustness focuses on synthetic image perturbations (noise, simulated weather artifacts, adversarial examples, etc.), which leaves open how robustness on synthetic distribution shift relates to distribution shift arising in real data. Informed by an evaluation of 204 ImageNet models in 213 different test conditions, we find that there is often little to no transfer of robustness from current synthetic to natural distribution shift. Moreover, most current techniques provide no robustness to the natural distribution shifts in our testbed. The main exception is training on larger and more diverse datasets, which in multiple cases increases robustness, but is still far from closing the performance gaps. Our results indicate that distribution shifts arising in real data are currently an open research problem. We provide our testbed and data as a resource for future work at https://modestyachts.github.io/imagenet-testbed/.
We introduce several new datasets, namely ImageNet-A/O and ImageNet-R, as well as a synthetic environment and testing suite we call CAOS. ImageNet-A/O allows researchers to focus on the blind spots remaining in ImageNet. ImageNet-R was specifically created with the pursuit of robust representations in mind, since its images are no longer simply natural but include artistic and other renditions. The CAOS suite is built on the CARLA simulator, allows the inclusion of anomalous objects, and enables the creation of reproducible synthetic environments and scenes for testing robustness. All of these datasets were created for testing robustness and measuring progress in robustness. They have been used in various other works to measure progress in robustness, and they allow for tangential progress that does not focus exclusively on natural accuracy. Given these datasets, we created several new methods aimed at advancing robustness research. We build on simple baselines in the form of Maximum Logit and the Typicality Score, and create a new data augmentation method in the form of DeepAugment that improves on the aforementioned benchmarks. Maximum Logit considers the logit values instead of the values after the softmax operation; this small change yields noticeable improvements. The Typicality Score compares the output distribution to a posterior distribution over classes. We show that this improves performance over the baseline on all tasks except segmentation, speculating that, at the pixel level, the semantic information of a pixel may be less meaningful than class-level information. Finally, the new augmentation technique DeepAugment uses neural networks to create augmentations on images that are radically different from the traditional geometric and camera-based transformations used previously.
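The Maximum Logit baseline described above is a one-line change from the common maximum-softmax-probability score. A minimal sketch of both anomaly scores (sign conventions vary; here higher means more in-distribution):

```python
import torch
import torch.nn.functional as F

def max_softmax_score(logits):
    """Classic baseline: confidence of the top class after softmax."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

def max_logit_score(logits):
    """Maximum Logit: skip the softmax and use the raw top logit."""
    return logits.max(dim=-1).values

# Toy usage: score a batch of model outputs; lower scores flag anomalies.
logits = torch.randn(4, 1000)
print(max_softmax_score(logits), max_logit_score(logits))
```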