Although state-of-the-art object detection methods have shown compelling performance, models are often not robust to adversarial attacks and out-of-distribution data. We introduce a new dataset, Natural Adversarial Objects (NAO), to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to mispredict with high confidence. The mean average precision (mAP) of EfficientDet-D7 drops 74.5% when evaluated on NAO compared with the standard MSCOCO validation set. Moreover, by comparing a variety of object detection architectures, we find that better performance on the MSCOCO validation set does not necessarily translate to better performance on NAO, suggesting that robustness cannot be simply achieved by training a more accurate model. We further investigate why objects in NAO are difficult to detect and classify. Experiments with shuffling image patches reveal that models are overly sensitive to local texture. In addition, using integrated gradients and background replacement, we find that detection models rely on pixel information within the bounding box and are insensitive to the background context when predicting class labels. NAO can be downloaded at https://drive.google.com/drive/folders/15p8sowojku6sseihlets86orfytgezi8.
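The patch-shuffling experiment used to probe texture sensitivity can be sketched as follows. This is a minimal illustration of the general idea (permuting image tiles destroys global shape while preserving local texture), with a hypothetical `shuffle_patches` helper operating on a plain 2D list of pixel values; it is not the authors' code.

```python
import random

def shuffle_patches(image, patch, seed=0):
    """Split a 2D image (list of pixel rows) into patch x patch tiles,
    randomly permute the tiles, and reassemble. Global shape is destroyed
    while local texture within each tile is preserved."""
    h, w = len(image), len(image[0])
    assert h % patch == 0 and w % patch == 0
    # Collect tiles in row-major order.
    tiles = [[row[c:c + patch] for row in image[r:r + patch]]
             for r in range(0, h, patch) for c in range(0, w, patch)]
    random.Random(seed).shuffle(tiles)
    # Write the permuted tiles back into a fresh image.
    out = [[0] * w for _ in range(h)]
    positions = [(r, c) for r in range(0, h, patch) for c in range(0, w, patch)]
    for i, (r, c) in enumerate(positions):
        for dr in range(patch):
            out[r + dr][c:c + patch] = tiles[i][dr]
    return out
```

A texture-biased model's prediction tends to survive this transformation, whereas a shape-reliant one degrades sharply.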
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, and provide a detailed analysis of the current state of the field of large-scale image classification and object detection.
The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset have become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.
Progress on object detection is enabled by datasets that focus the research community's attention on open challenges. This process led us from simple images to complex scenes and from bounding boxes to segmentation masks. In this work, we introduce LVIS (pronounced 'el-vis'): a new dataset for Large Vocabulary Instance Segmentation. We plan to collect ∼2 million high-quality instance segmentation masks for over 1000 entry-level object categories in 164k images. Due to the Zipfian distribution of categories in natural images, LVIS naturally has a long tail of categories with few training samples. Given that state-of-the-art deep learning methods for object detection perform poorly in the low-sample regime, we believe that our dataset poses an important and exciting new scientific challenge. LVIS is available at http://www.lvisdataset.org.
Adversarial patches are images designed to fool otherwise well-performing neural-network-based computer vision models. Although these attacks were initially conceived and studied digitally, with the raw pixel values of the image being perturbed, recent work has shown that these attacks can successfully transfer to the physical world. This can be accomplished by printing the patch and adding it to the scene of a newly captured image or video footage. In this work, we further test the efficacy of adversarial patch attacks in the physical world under more challenging conditions. We consider object detection models trained on overhead imagery acquired by aerial or satellite cameras, and we test physical adversarial patches inserted into scenes of a desert environment. Our main finding is that it is far more difficult to successfully implement adversarial patch attacks under these conditions than under previously considered conditions. This has important implications for AI safety, because the real-world threat posed by adversarial examples may have been overstated.
Adversarial patch attacks against image classification deep neural networks (DNNs), which inject arbitrary distortions within a bounded region of an image, can generate adversarial perturbations that are robust (i.e., remain adversarial in the physical world) and universal (i.e., remain adversarial on any input). Such attacks can lead to severe consequences in real-world DNN-based systems. This work proposes Jujutsu, a technique to detect and mitigate robust and universal adversarial patch attacks. For detection, Jujutsu exploits the attacks' universal property: it first locates the potential adversarial patch region and then strategically transplants it to a dedicated region in a new image to determine whether it is truly malicious. For attack mitigation, Jujutsu exploits the attacks' localized nature via image inpainting to synthesize semantic content for the pixels corrupted by the attack and reconstruct a "clean" image. We evaluate Jujutsu on four diverse datasets (ImageNet, ImageNette, CelebA, and Place365) and show that it achieves superior performance and significantly outperforms existing techniques. We find that Jujutsu can further defend against different variants of the basic attack, including 1) physical attacks; 2) attacks targeting diverse classes; 3) attacks with patches of different shapes; and 4) adaptive attacks.
Interactive object understanding, or what we can do to objects and how, is a long-standing goal of computer vision. In this paper, we tackle this problem by observing human hands in in-the-wild egocentric videos. We demonstrate that observing what human hands interact with, and how, can provide both the relevant data and the necessary supervision. Attending to hands readily localizes and stabilizes active objects for learning, and reveals where interactions with objects occur. Analyzing the hands shows what we can do to objects and how. We apply these basic principles on the EPIC-KITCHENS dataset and successfully learn state-sensitive features, as well as object affordances (regions of interaction and afforded grasps), purely by observing hands in egocentric videos.
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
In this paper, we present a methodology for fisheries-related data that allows us to iteratively label image datasets through multiple training and production loops that can leverage crowdsourcing interfaces. We present the algorithm and its results on two separate sets of image data collected using autonomous underwater vehicles at the seabed. The first dataset consists of 2,026 completely unlabeled images, while the second consists of 21,968 images annotated by experts. Our results show that training with a small subset and iterating to build a larger labeled dataset allows us to converge to a fully annotated dataset in a small number of iterations. Even in the case of the expert-labeled dataset, a single iteration of the methodology improved the labels by discovering additional complex examples of fish-related labels that were either too small or obscured by the contrast limitations associated with underwater imagery.
We introduce RP2K, a new large-scale retail product dataset for fine-grained image classification. Unlike previous datasets that focus on relatively few products, we collect more than 500,000 images of retail products on shelves, belonging to 2,000 different products. Our dataset aims to advance research in retail object recognition, which has massive applications such as automatic shelf auditing and image-based product information retrieval. Our dataset enjoys the following properties: (1) it is by far the largest-scale dataset in terms of product categories; (2) all images are captured manually in physical retail stores with natural lighting, matching the scenario of real applications; (3) we provide rich annotations for each object, including its size, shape, and flavor/scent. We believe our dataset can benefit both computer vision research and the retail industry. Our dataset is publicly available at https://www.pinlandata.com/rp2k_dataset.
We introduce several new datasets, namely ImageNet-A/O and ImageNet-R, as well as a synthetic environment and testing suite we call CAOS. ImageNet-A/O allows researchers to focus on the blind spots remaining in ImageNet. ImageNet-R was created specifically for the pursuit of robust representations, as representations are no longer simply natural but include artistic and other renditions. The CAOS suite is built on the CARLA simulator, allowing the inclusion of anomalous objects and the creation of reproducible synthetic environments and scenes for testing robustness. All of these datasets were created for testing robustness and measuring progress in robustness. They have been used in various other works to measure their own progress in robustness, enabling tangential progress that does not focus solely on natural accuracy. Given these datasets, we created several new methods aimed at advancing robustness research. We build simple baselines in the form of MaxLogit and a typicality score, and create a new data augmentation method in the form of DeepAugment that improves on the aforementioned benchmarks. MaxLogit considers the logit values rather than the values after the softmax operation; this small change yields noticeable improvements. The typicality score compares the output distribution to a posterior distribution over classes. We show that this improves performance over the baseline on all tasks except segmentation; we conjecture this is because, at the pixel level, the semantic information of a pixel is less meaningful than class-level information. Finally, the new augmentation technique DeepAugment uses neural networks to create augmentations of images that are radically different from the conventional geometric and camera-based transformations used previously.
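The MaxLogit baseline described above amounts to a one-line change from the usual maximum-softmax-probability (MSP) confidence score. A minimal sketch, using hypothetical scoring helpers on a plain list of logits:

```python
import math

def max_softmax_score(logits):
    """Maximum softmax probability (MSP), the usual confidence score.
    Subtracting the max logit first keeps exp() numerically stable."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return max(exps) / sum(exps)

def max_logit_score(logits):
    """MaxLogit: score with the raw maximum logit, skipping the softmax
    normalization that discards magnitude information."""
    return max(logits)
```

Because softmax is shift-invariant, logit vectors such as [10, 9] and [2, 1] receive identical MSP scores even though the network was far more activated by the first input; MaxLogit keeps that magnitude, which is useful for flagging anomalous inputs.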
Semantic understanding of visual scenes is one of the holy grails of computer vision. Despite efforts of the community in data collection, there are still few image datasets covering a wide range of scenes and object categories with pixel-wise annotations for scene understanding. In this work, we present a densely annotated dataset ADE20K, which spans diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. Totally there are 25k images of the complex everyday scenes containing a variety of objects in their natural spatial context. On average there are 19.5 instances and 10.5 object classes per image. Based on ADE20K, we construct benchmarks for scene parsing and instance segmentation. We provide baseline performances on both of the benchmarks and re-implement the state-of-the-art models for open source. We further evaluate the effect of synchronized batch normalization and find that a reasonably large batch size is crucial for the semantic segmentation performance. We show that the networks trained on ADE20K are able to segment a wide variety of scenes and objects.
Existing image classification datasets used in computer vision tend to have a uniform distribution of images across object categories. In contrast, the natural world is heavily imbalanced, as some species are more abundant and easier to photograph than others. To encourage further progress in challenging real world conditions we present the iNaturalist species classification and detection dataset, consisting of 859,000 images from over 5,000 different species of plants and animals. It features visually similar species, captured in a wide variety of situations, from all over the world. Images were collected with different camera types, have varying image quality, feature a large class imbalance, and have been verified by multiple citizen scientists. We discuss the collection of the dataset and present extensive baseline experiments using state-of-the-art computer vision classification and detection models. Results show that current non-ensemble based methods achieve only 67% top-1 classification accuracy, illustrating the difficulty of the dataset. Specifically, we observe poor results for classes with small numbers of training examples, suggesting more attention is needed in low-shot learning.
Object detectors, which are widely deployed in security-critical systems such as autonomous vehicles, have been found vulnerable to patch hiding attacks. An attacker can use a single physically-realizable adversarial patch to make the object detector miss the detection of victim objects and undermine the functionality of object detection applications. In this paper, we propose ObjectSeeker for certifiably robust object detection against patch hiding attacks. The key insight in ObjectSeeker is patch-agnostic masking: we aim to mask out the entire adversarial patch without knowing the shape, size, and location of the patch. This masking operation neutralizes the adversarial effect and allows any vanilla object detector to safely detect objects on the masked images. Remarkably, we can evaluate ObjectSeeker's robustness in a certifiable manner: we develop a certification procedure to formally determine if ObjectSeeker can detect certain objects against any white-box adaptive attack within the threat model, achieving certifiable robustness. Our experiments demonstrate a significant (~10%-40% absolute and ~2-6x relative) improvement in certifiable robustness over the prior work, as well as high clean performance (~1% drop compared with undefended models).
This paper pushes the envelope on decomposing camouflaged regions in an image into meaningful components, namely camouflaged instances. To promote the new task of camouflaged instance segmentation, we introduce a dataset, dubbed CAMO++, that is superior in terms of quantity and diversity. The new dataset substantially increases the number of images with hierarchical pixel-wise ground truth. We also provide a benchmark suite for the camouflaged instance segmentation task. In particular, we conduct extensive evaluations on the newly constructed CAMO++ dataset in various scenarios. We further propose a camouflage fusion learning (CFL) framework for camouflaged instance segmentation to improve the performance of state-of-the-art methods. The dataset, model, evaluation suite, and benchmark will be made publicly available on our project page: https://sites.google.com/view/ltnghia/research/camo_plus_plus
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to humans but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples has become one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks and defenses on adversarial examples have drawn great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
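As one concrete instance of the generation methods such surveys cover, the fast gradient sign method (FGSM) perturbs an input by a small step in the direction of the sign of the loss gradient. A minimal sketch on a toy logistic-regression model (an illustrative helper, not code from the surveyed work):

```python
import math

def fgsm_linear(x, y, w, eps):
    """FGSM on a toy logistic-regression model with weights w,
    label y in {-1, +1}, and loss = log(1 + exp(-y * w.x)).
    Returns x perturbed by eps * sign(d loss / d x)."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    coef = -y / (1.0 + math.exp(margin))  # d loss / d (w.x)
    sign = lambda g: (g > 0) - (g < 0)
    # Gradient w.r.t. x is coef * w; step each coordinate by eps.
    return [xi + eps * sign(coef * wi) for xi, wi in zip(x, w)]
```

Each coordinate moves by exactly eps, yet the classification margin provably shrinks, which is the essence of why small L-infinity perturbations suffice against high-dimensional models.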
In object detection, false negatives arise when a detector fails to detect a target object. To understand why object detectors produce false negatives, we identify five "false negative mechanisms," each of which describes how a specific component inside the detector architecture failed. Focusing on two-stage and one-stage anchor-based object detector architectures, we introduce a framework for quantifying these false negative mechanisms. Using this framework, we investigate why Faster R-CNN and RetinaNet fail to detect objects in benchmark vision datasets and robotics datasets. We show that a detector's false negative mechanisms differ significantly between computer vision benchmark datasets and robotics deployment scenarios. This has implications for the translation of object detectors developed for benchmark datasets to robotics applications.
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks by introducing a part-based model for object classification. We believe that richer forms of annotation help guide neural networks to learn more robust features without requiring more samples or larger models. Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and then classify the segmented object. Empirically, our part-based models achieve both higher accuracy and higher adversarial robustness than a ResNet-50 baseline on all three datasets. For instance, given the same level of robustness, the clean accuracy of our part models is up to 15 percentage points higher. Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations. The code is publicly available at https://github.com/chawins/adv-part-model.
Standard approaches to adversarial patch generation lead to noisy, conspicuous patterns that are easily recognizable by humans. Recent research has proposed several approaches to generating naturalistic patches using generative adversarial networks (GANs), but only a few of them have been evaluated in the object detection use case. Moreover, the state of the art mostly focuses on suppressing a single large bounding box in the input by directly overlapping it with the patch; suppressing objects near the patch is a different, more complex task. In this work, we evaluate existing approaches to generating inconspicuous patches. We adapt methods originally developed for different computer vision tasks to the object detection use case with YOLOv3 and the COCO dataset. We evaluate two approaches to generating naturalistic patches: incorporating patch generation into the GAN training process, and using a pretrained GAN. In both cases, we assess the trade-off between performance and naturalistic patch appearance. Our experiments show that using a pretrained GAN helps obtain realistic-looking patches while preserving performance similar to that of conventional adversarial patches.
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
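The color-bit-depth squeezer and the comparison-based detection rule described above can be sketched as follows. This is a minimal illustration: `predict` is a hypothetical model callable returning a probability vector, and the default bit depth and threshold are illustrative, not the paper's tuned values.

```python
def squeeze_bit_depth(pixels, bits):
    """Reduce 8-bit pixel values to `bits` bits of color depth, then
    scale back to the [0, 255] range (the color-depth squeezer)."""
    levels = (1 << bits) - 1
    return [round(round(p / 255 * levels) / levels * 255) for p in pixels]

def l1_distance(a, b):
    """L1 distance between two prediction vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def is_adversarial(predict, x, bits=4, threshold=1.0):
    """Flag an input when the model's predictions on the original and
    squeezed versions diverge by more than a threshold."""
    return l1_distance(predict(x), predict(squeeze_bit_depth(x, bits))) > threshold
```

The intuition is that squeezing collapses many nearby inputs onto one representative; a natural image's prediction barely moves, while a carefully tuned adversarial perturbation is often destroyed, producing a large prediction shift.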