Many visualization techniques have been created to help explain the behavior of convolutional neural networks (CNNs), but they largely consist of static diagrams that convey limited information. Interactive visualizations can provide richer insights and allow users to more easily explore a model's behavior; however, they are typically not easily reusable and are specific to a particular model. We introduce Visual Feature Search, a novel interactive visualization that is generalizable to any CNN and can easily be incorporated into a researcher's workflow. Our tool allows a user to highlight an image region and search for images from a given dataset with the most similar CNN features. It supports searching through large image datasets with an efficient cache-based search implementation. We demonstrate how our tool elucidates different aspects of model behavior by performing experiments on supervised, self-supervised, and human-edited CNNs. We also release a portable Python library and several IPython notebooks to enable researchers to easily use our tool in their own experiments. Our code can be found at https://github.com/lookingglasslab/VisualFeatureSearch.
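As a rough illustration of the idea (not the library's actual API), the sketch below caches pooled CNN features for a search set once and then ranks images by cosine similarity to the features of a user-highlighted query region; the backbone, masking strategy, and tensor shapes are assumptions made for the example.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Truncated ResNet-50: keep everything up to the last conv block (illustrative choice).
backbone = models.resnet50(weights="IMAGENET1K_V2")
extractor = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()

@torch.no_grad()
def pooled_features(images):
    """Spatially averaged conv features, L2-normalized for cosine similarity."""
    fmap = extractor(images)                      # (N, C, H, W)
    return F.normalize(fmap.mean(dim=(2, 3)), dim=1)

# 1) Cache features for the whole search set once (random stand-in images here).
dataset_images = torch.rand(64, 3, 224, 224)
cache = pooled_features(dataset_images)

# 2) At query time, mask everything outside the highlighted region, embed the
#    query, and rank cached images by cosine similarity.
@torch.no_grad()
def search(query_image, region_mask, top_k=5):
    masked = query_image * region_mask            # zero out non-highlighted pixels
    q = pooled_features(masked.unsqueeze(0))      # (1, C)
    scores = (cache @ q.T).squeeze(1)             # cosine similarities, (N,)
    return scores.topk(top_k)

query = torch.rand(3, 224, 224)
mask = torch.zeros(1, 224, 224); mask[:, 60:160, 60:160] = 1.0
values, indices = search(query, mask)
print(indices.tolist(), values.tolist())
```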
The emerging field of eXplainable Artificial Intelligence (XAI) aims to bring transparency to today's powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying where important features occur (but not what they represent), global explanation techniques visualize which concepts a model has generally learned to encode. Both approaches therefore provide only partial insights and leave the burden of interpreting the model's reasoning to the user. Only a few contemporary techniques aim to combine the principles behind local and global XAI to obtain more informative explanations. Those methods, however, are often limited to specific model architectures or impose additional requirements on training regimes or data and label availability, which effectively rules out their post-hoc application to arbitrary pre-trained models. In this work, we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives of XAI and thus allows answering both the "where" and "what" questions without additional constraints. We further introduce the principle of Relevance Maximization for finding representative examples of encoded concepts based on their usefulness to the model, thereby lifting the dependency on the common practice of Activation Maximization and its limitations. We demonstrate the capabilities of our methods in various settings, showing that Concept Relevance Propagation and Relevance Maximization lead to more human-interpretable explanations and provide deep insights into a model's representations and reasoning through concept atlases, concept-composition analyses, and quantitative studies of concept subspaces and their role in fine-grained decision making.
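The Relevance Maximization idea can be illustrated without committing to any specific LRP implementation: rank reference samples by how much relevance a chosen unit contributes to the prediction. The sketch below uses a simple gradient-times-activation proxy in place of CRP's LRP-based relevance, so it only demonstrates the selection principle; the layer, channel, and data are placeholders.

```python
import torch
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def channel_relevance(images, layer, channel):
    """Proxy relevance of one channel per image: sum of activation * gradient of
    the top-class logit w.r.t. that activation (an illustration, not LRP)."""
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    logits = model(images)
    handle.remove()
    top = logits.max(dim=1).values.sum()
    grads = torch.autograd.grad(top, acts["a"])[0]
    rel = (acts["a"] * grads)[:, channel]           # (N, H, W)
    return rel.flatten(1).sum(dim=1)                # (N,)

images = torch.rand(32, 3, 224, 224, requires_grad=True)   # stand-in reference batch
scores = channel_relevance(images, model.layer4[-1], channel=5)
print(scores.topk(3).indices.tolist())   # most "relevant" reference samples for channel 5
```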
Explanations for Artificial Intelligence (AI) predictions are increasingly important, and even necessary, in many high-stakes applications where humans are the final decision makers. In this work, we propose two novel architectures of self-interpretable image classifiers that first explain and then predict (as opposed to post-hoc explanation) by harnessing the visual correspondences between a query image and exemplars. Our models consistently improve (by 1 to 4 points) on out-of-distribution (OOD) datasets while performing marginally worse (by 1 to 2 points) than a ResNet-50 and a $k$-nearest-neighbor (kNN) classifier on in-distribution tests. Via large-scale human studies on ImageNet and CUB, our correspondence-based explanations are found to be more useful to users than kNN explanations. Our explanations help users reject the AI's wrong decisions more accurately than all other tested methods. Interestingly, we show for the first time that it is possible to achieve complementary human-AI team accuracy (i.e., higher than either the AI alone or the human alone) on the ImageNet and CUB image classification tasks.
We propose a technique for producing 'visual explanations' for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say 'dog' in a classification network or a sequence of words in a captioning network) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. visual question answering) or reinforcement learning, all without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization.
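A minimal Grad-CAM computation can be written in a few lines of PyTorch: pool the gradients of the target logit over the last convolutional feature map to weight its channels, then apply a ReLU. This is a bare-bones sketch (the full method also combines the map with fine-grained visualizations such as guided backpropagation); the backbone and layer choice here are illustrative.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2").eval()
target_layer = model.layer4[-1]            # last conv block (assumed choice)

def grad_cam(image, class_idx=None):
    store = {}
    handle = target_layer.register_forward_hook(lambda m, i, o: store.update(act=o))
    logits = model(image.unsqueeze(0))
    handle.remove()
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    grads = torch.autograd.grad(logits[0, class_idx], store["act"])[0]   # (1, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)       # global-average-pooled gradients
    cam = F.relu((weights * store["act"]).sum(dim=1))    # weighted channel sum, (1, H, W)
    cam = cam / (cam.max() + 1e-8)
    return F.interpolate(cam.unsqueeze(1), size=image.shape[1:], mode="bilinear")[0, 0]

heatmap = grad_cam(torch.rand(3, 224, 224))   # stand-in input image
print(heatmap.shape)
```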
We present SImProv, a scalable image provenance framework to match a query image back to a trusted database of originals and identify possible manipulations on the query. SImProv consists of three stages: a scalable search stage that retrieves the top-k most similar images; a re-ranking and near-duplicate detection stage for identifying the original among the candidates; and finally, a manipulation detection and visualization stage for localizing regions within the query that may have been manipulated to differ from the original. SImProv is robust to benign image transformations that commonly occur during online redistribution, such as artifacts due to noise and recompression degradation, as well as out-of-place transformations due to image padding, warping, and changes in size and shape. Robustness towards out-of-place transformations is achieved via end-to-end training of a differentiable warping module within the comparator architecture. We demonstrate effective retrieval and manipulation detection over a dataset of 100 million images.
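The first (scalable search) stage can be illustrated with a generic nearest-neighbour index over precomputed image descriptors. The sketch below uses FAISS with random stand-in descriptors; it is not SImProv's actual pipeline, whose descriptors, re-ranking comparator, and warping module are all learned.

```python
import numpy as np
import faiss   # nearest-neighbour search library

d = 256                                              # descriptor dimensionality (assumed)
db = np.random.rand(100_000, d).astype("float32")    # stand-in database descriptors
db /= np.linalg.norm(db, axis=1, keepdims=True)      # unit norm -> inner product = cosine

index = faiss.IndexFlatIP(d)      # exact inner-product index; swap for IVF/PQ at larger scale
index.add(db)

query = np.random.rand(1, d).astype("float32")
query /= np.linalg.norm(query)
scores, ids = index.search(query, 10)                # stage 1: retrieve top-k candidates
print(ids[0], scores[0])
# Stages 2-3 (re-ranking / near-duplicate detection and manipulation localization)
# would then operate on these candidates with the learned comparator network.
```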
A visual counterfactual explanation replaces image regions in a query image with regions from a distractor image such that the system's decision on the transformed image changes to the distractor class. In this work, we present a novel framework for computing visual counterfactual explanations based on two key ideas. First, we enforce that the replaced and replacing regions contain the same semantic part, resulting in more semantically consistent explanations. Second, we use multiple distractor images in a computationally efficient way and obtain more discriminative explanations with fewer region replacements. Our approach is 27% more semantically consistent, and faster, than competing methods on three fine-grained image recognition datasets. We highlight the utility of our counterfactuals over existing works through machine teaching experiments in which we teach humans to classify different bird species. We also complement our explanations with the vocabulary of parts and attributes that contributed to the system's decision. In this task too, we obtain state-of-the-art results when using our counterfactual explanations relative to existing works, reinforcing the importance of semantically consistent explanations. The source code is available at https://github.com/facebookresearch/visual-counterfactuals.
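A naive version of the region-replacement idea, without the semantic-part constraint or the multi-distractor efficiency tricks described above, can be sketched as a greedy search over image-space cells; everything below (model, grid size, class index) is a placeholder.

```python
import torch
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

@torch.no_grad()
def greedy_counterfactual(query, distractor, target_class, grid=7, max_swaps=5):
    """Greedily paste distractor cells into the query until the model predicts
    target_class (simplified: image-space cells, no part matching)."""
    edited, swapped = query.clone(), []
    cell = query.shape[-1] // grid
    for _ in range(max_swaps):
        best = None
        for gy in range(grid):
            for gx in range(grid):
                if (gy, gx) in swapped:
                    continue
                ys, xs = gy * cell, gx * cell
                trial = edited.clone()
                trial[:, ys:ys + cell, xs:xs + cell] = distractor[:, ys:ys + cell, xs:xs + cell]
                score = model(trial.unsqueeze(0))[0, target_class].item()
                if best is None or score > best[0]:
                    best = (score, (gy, gx), trial)
        _, cell_id, edited = best
        swapped.append(cell_id)
        if model(edited.unsqueeze(0)).argmax().item() == target_class:
            break
    return edited, swapped

query, distractor = torch.rand(3, 224, 224), torch.rand(3, 224, 224)
edited, swapped_cells = greedy_counterfactual(query, distractor, target_class=207)
print(swapped_cells)
```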
We recently developed a deep learning method that can determine the critical peak stress of a material by examining scanning electron microscope (SEM) images of the material's crystals. However, it has remained somewhat unclear which image features the network keys on when making its predictions. It is common in computer vision to employ explainable-AI saliency maps, which indicate which parts of an image are important to the network's decision. One can usually deduce the important features by inspecting these salient locations. However, SEM images of crystals are more abstract than natural-image photographs. As a result, it is not easy to tell what features matter at the most salient locations. To address this issue, we developed a method that helps us map the salient locations in an SEM image onto more easily interpretable, non-abstract textures.
Fine-grained image analysis (FGIA) is a long-standing and fundamental problem in computer vision and pattern recognition, and underpins a diverse set of real-world applications. The task of FGIA targets analyzing visual objects from subordinate categories, e.g., species of birds or models of cars. The small inter-class and large intra-class variation inherent to fine-grained analysis makes it a challenging problem. Capitalizing on advances in deep learning, in recent years we have witnessed remarkable progress in deep-learning-powered FGIA. In this paper, we present a systematic survey of these advances, in which we attempt to re-define and broaden the field of FGIA by consolidating two fundamental fine-grained research areas: fine-grained image recognition and fine-grained image retrieval. In addition, we also review other key issues of FGIA, such as publicly available benchmark datasets and related domain-specific applications. We conclude by highlighting several research directions and open problems that need further exploration from the community.
Interactive object understanding, or what we can do to objects and how, is a long-standing goal of computer vision. In this paper, we tackle the problem by observing human hands in in-the-wild egocentric videos. We demonstrate that observing what human hands interact with and how provides both the relevant data and the necessary supervision. Attending to hands readily localizes and stabilizes active objects for learning and reveals where interactions with objects occur. Analyzing the hands shows what we can do to objects and how. We apply these basic principles on the EPIC-KITCHENS dataset and successfully learn state-sensitive features, as well as object affordances (regions where interactions occur), purely by observing hands in egocentric video.
Unsupervised representation learning methods have proven effective at learning the visual semantics of a target dataset. The main idea behind these methods is that different views of the same image represent the same semantics. In this paper, we further introduce an add-on module to inject knowledge of the spatial cross-correlation between samples. This in turn leads to distilling intra-class information, including feature-level locations and similarity between instances of the same class. The proposed add-on can be incorporated into existing methods such as SwAV, and the additional module can later be removed for inference without modifying the learned weights. Through an extensive set of empirical evaluations, we verify that our method improves performance in detecting class activation maps, top-1 classification accuracy, and downstream tasks such as object detection, across different configuration settings.
Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al., 2012). However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.
Explaining the decisions of an Artificial Intelligence (AI) model is increasingly critical in many real-world, high-stakes applications. Hundreds of papers have either proposed new feature attribution methods or discussed and harnessed these tools in their work. However, despite humans being the target end-users, most attribution methods have only been evaluated on proxy automatic-evaluation metrics (Zhang et al., 2018; Zhou et al., 2016; Petsiuk et al., 2018). In this paper, we conduct the first user study to measure the effectiveness of attribution maps in assisting humans in ImageNet classification and Stanford Dogs fine-grained classification, and when an image is natural or adversarial (i.e., contains adversarial perturbations). Overall, feature attribution is more effective than showing humans the nearest training-set examples. On the harder task of dog classification, presenting attribution maps to humans does not help, and instead hurts the performance of the human-AI team compared to the AI alone. Importantly, we found automatic attribution-map evaluation measures to correlate poorly with actual human-AI team performance. Our findings encourage the community to rigorously test their methods on downstream human-in-the-loop applications and to rethink the existing evaluation metrics.
What can neural networks learn about the visual world from a single image? While it obviously cannot contain the multitude of possible objects, scenes, and lighting conditions that exist within the space of all possible 256^(3x224x224) square images of size 224, it might still provide a strong prior for natural images. To analyze this hypothesis, we develop a framework for training neural networks from a single image by means of knowledge distillation from a supervised, pretrained teacher. With this, we find the answer to the above question to be: "surprisingly, a lot". In quantitative terms, we find top-1 accuracies of 94%/74% on CIFAR-10/100, strong results on ImageNet, and, by extending this method to audio, 84% on speech commands. In extensive analyses, we disentangle the effects of augmentations, the choice of source image, and the network architecture, and also discover "panda neurons" in networks that have never seen a panda. This work shows that one image can be used to extrapolate to thousands of object classes and motivates a renewed research agenda on the fundamental interplay between augmentations and images.
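The training loop amounts to ordinary soft-label distillation where every batch is built from heavy augmentations of one source image. The sketch below is a stripped-down version under assumed hyperparameters (teacher/student choice, temperature, augmentations, step count); the paper's actual recipe is more elaborate.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

teacher = models.resnet50(weights="IMAGENET1K_V2").eval()   # supervised, pretrained teacher
student = models.resnet18(weights=None)                      # trained from scratch

# Heavy augmentation turns the single source image into an endless stream of crops.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.ToTensor(),
])

source = Image.new("RGB", (1024, 1024))   # stand-in for the single training image
optimizer = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)
temperature = 4.0

for step in range(10):                    # in practice: many steps over many crops
    batch = torch.stack([augment(source) for _ in range(16)])
    with torch.no_grad():
        t_logits = teacher(batch)
    s_logits = student(batch)
    # KL divergence between softened teacher and student distributions.
    loss = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                    F.softmax(t_logits / temperature, dim=1),
                    reduction="batchmean") * temperature ** 2
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```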
Vision Transformers (ViTs) have gained significant popularity in recent years and have proliferated into many applications. However, it is not well explored how varied their behavior is under different learning paradigms. We compare ViTs trained through different methods of supervision, and show that they learn a diverse range of behaviors in terms of their attention, representations, and downstream performance. We also discover ViT behaviors that are consistent across supervision, including the emergence of Offset Local Attention Heads. These are self-attention heads that attend to a token adjacent to the current token with a fixed directional offset, a phenomenon that to the best of our knowledge has not been highlighted in any prior work. Our analysis shows that ViTs are highly flexible and learn to process local and global information in different orders depending on their training method. We find that contrastive self-supervised methods learn features that are competitive with explicitly supervised features, and they can even be superior for part-level tasks. We also find that the representations of reconstruction-based models show non-trivial similarity to contrastive self-supervised models. Finally, we show how the "best" layer for a given task varies by both supervision method and task, further demonstrating the differing order of information processing in ViTs.
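One way to look for Offset Local Attention Heads is to measure, per head, which relative patch offset receives the most attention mass on average over query tokens. The sketch below applies this simple diagnostic to a stand-in attention tensor; it is only an illustration, not the paper's exact analysis.

```python
import torch

def offset_locality(attn, grid):
    """For each head, find the relative (dy, dx) offset that receives the most
    attention mass, averaged over all query patch tokens.
    attn: (heads, grid*grid, grid*grid) attention over patch tokens (CLS removed)."""
    heads, n, _ = attn.shape
    ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid), indexing="ij")
    pos = torch.stack([ys.flatten(), xs.flatten()], dim=1)          # (n, 2) patch coords
    offsets = {}
    for h in range(heads):
        mass = {}
        for q in range(n):
            for k in range(n):
                d = tuple((pos[k] - pos[q]).tolist())
                mass[d] = mass.get(d, 0.0) + attn[h, q, k].item() / n
        best = max(mass, key=mass.get)
        offsets[h] = (best, mass[best])
    return offsets   # e.g. head -> ((0, 1), 0.62): attends one patch to the right

attn = torch.softmax(torch.randn(6, 14 * 14, 14 * 14), dim=-1)      # stand-in attention
for head, (offset, share) in offset_locality(attn, grid=14).items():
    print(f"head {head}: dominant offset {offset}, attention share {share:.2f}")
```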
The robotics community has begun to rely heavily on increasingly realistic 3D simulators for large-scale training of robots on massive amounts of data. However, once robots are deployed in the real world, the simulation gap, as well as real-world changes (e.g., lights, object displacements), leads to errors. In this paper, we introduce Sim2RealViz, a visual analytics tool to assist experts in understanding and reducing this gap for robot ego-pose estimation tasks, i.e., estimating a robot's position using a trained model. Sim2RealViz displays details of a given model and the performance of its instances in both simulation and the real world. Experts can identify environment differences that impact model predictions at a given location and explore ways to address them through direct interaction with the model's hypotheses. We detail the design of the tool, along with case studies on the exploitation of a regression-to-the-mean bias and how it can be addressed, and on how models are perturbed by the vanishing of landmarks such as bikes.
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
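Zero-shot transfer with the released model follows the usage shown in the linked repository: embed the image and a set of natural-language class prompts, then take a softmax over their cosine similarities. The class prompts and stand-in image below are illustrative.

```python
import torch
import clip              # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Natural-language class "prompts" replace a fixed label set.
class_names = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
text = clip.tokenize(class_names).to(device)

image = preprocess(Image.new("RGB", (224, 224))).unsqueeze(0).to(device)  # stand-in image

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Zero-shot prediction: softmax over scaled cosine similarities.
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(class_names, probs[0].tolist())))
```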
In this paper, we present DendroMap, a novel approach to interactively exploring large-scale image datasets for machine learning (ML). ML practitioners often explore image datasets by generating a grid of images or by projecting high-dimensional representations of images into 2-D using dimensionality reduction techniques (e.g., t-SNE). However, neither approach effectively scales to large datasets, because images are ineffectively organized and the interactions are insufficient. To address these challenges, we develop DendroMap by adapting Treemaps, a well-known visualization technique. DendroMap effectively organizes images by extracting hierarchical cluster structures from high-dimensional representations of images. It enables users to make sense of the overall distribution of a dataset and to interactively zoom into specific areas of interest at multiple levels of abstraction. Our case studies with widely used image datasets for deep learning demonstrate that users can discover insights about datasets and trained models by examining the diversity of images, identifying underperforming subgroups, and analyzing classification errors. We conducted a user study that evaluates the effectiveness of DendroMap in grouping and searching tasks by comparing it with a gridified version of t-SNE, and found that participants preferred DendroMap. DendroMap is available at https://div-lab.github.io/dendromap/.
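DendroMap itself is an interactive visualization, but its organizing step, extracting a hierarchical cluster structure from high-dimensional image representations, can be sketched with standard agglomerative clustering; the features and cluster counts below are placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in for high-dimensional image representations (e.g. CNN penultimate features).
features = np.random.rand(500, 512)

# Agglomerative clustering yields the hierarchy that a treemap-style view can nest.
Z = linkage(features, method="ward")

# Cut the dendrogram at several levels of granularity, coarse to fine.
for k in (4, 16, 64):
    labels = fcluster(Z, t=k, criterion="maxclust")
    sizes = np.bincount(labels)[1:]
    print(f"{k:>3} clusters, largest contains {sizes.max()} images")
```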
This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [21] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.
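The pretext task reduces to sampling a centre patch plus one of its eight neighbours and training a shared-backbone network to classify the relative position. The sketch below omits the paper's safeguards against trivial shortcuts (e.g. chromatic-aberration handling) and uses assumed patch sizes and a stand-in image.

```python
import random
import torch
import torch.nn as nn
from torchvision import models

# Shared backbone embeds each patch; a small head classifies the relative position.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()
head = nn.Linear(2 * 512, 8)          # 8 possible neighbour positions around the centre

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def sample_pair(image, patch=96, gap=16):
    """Crop a centre patch and one of its 8 neighbours (with a small gap)."""
    _, H, W = image.shape
    cy = random.randint(patch + gap, H - 2 * patch - gap - 1)
    cx = random.randint(patch + gap, W - 2 * patch - gap - 1)
    label = random.randrange(8)
    dy, dx = OFFSETS[label]
    ny, nx = cy + dy * (patch + gap), cx + dx * (patch + gap)
    centre = image[:, cy:cy + patch, cx:cx + patch]
    neighbour = image[:, ny:ny + patch, nx:nx + patch]
    return centre, neighbour, label

optimizer = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

image = torch.rand(3, 512, 512)       # stand-in for an unlabeled image
for step in range(10):
    c, n, y = sample_pair(image)
    logits = head(torch.cat([backbone(c.unsqueeze(0)), backbone(n.unsqueeze(0))], dim=1))
    loss = criterion(logits, torch.tensor([y]))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```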
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
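Spectral Relevance Analysis clusters per-sample relevance maps so that groups of similar explanation patterns, i.e. distinct problem-solving strategies, become visible. The sketch below runs spectral clustering on stand-in downsized heatmaps; in practice the heatmaps would come from an attribution method such as LRP, and the hyperparameters here are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Stand-in for per-sample relevance heatmaps (e.g. from LRP), one per correctly
# classified image of a single class, downsized to a common small resolution.
heatmaps = np.random.rand(200, 16, 16)
X = heatmaps.reshape(len(heatmaps), -1)

# Spectral clustering over the flattened heatmaps groups samples whose
# explanations look alike, i.e. samples solved with a similar "strategy".
clustering = SpectralClustering(n_clusters=4, affinity="nearest_neighbors",
                                n_neighbors=10, random_state=0)
labels = clustering.fit_predict(X)

for c in range(4):
    print(f"strategy cluster {c}: {np.sum(labels == c)} samples")
# Unusually small or odd clusters are candidates for spurious strategies (e.g. watermarks).
```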
We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders, a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
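The training objective combines a masked-region reconstruction loss with an adversarial loss from a discriminator judging the filled-in result, with the weighting tilted heavily toward reconstruction. The toy sketch below shows that joint generator loss with placeholder networks; architectures, sizes, and weights are illustrative assumptions, not the paper's models.

```python
import torch
import torch.nn as nn

# Toy encoder-decoder ("context encoder") and discriminator; the real models are deeper.
context_encoder = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
    nn.Conv2d(64, 64, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(64, 64, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Sigmoid(),
)
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, 2, 1), nn.Flatten(), nn.LazyLinear(1),
)

rec_loss, adv_loss = nn.MSELoss(), nn.BCEWithLogitsLoss()
lambda_rec, lambda_adv = 0.999, 0.001     # reconstruction weighted heavily (assumed values)

images = torch.rand(8, 3, 64, 64)         # stand-in batch
mask = torch.ones_like(images); mask[:, :, 16:48, 16:48] = 0     # central hole
generated = context_encoder(images * mask)

# Generator objective: pixel-wise reconstruction of the hole + fooling the discriminator.
hole = 1 - mask
g_loss = (lambda_rec * rec_loss(generated * hole, images * hole)
          + lambda_adv * adv_loss(discriminator(generated),
                                  torch.ones(len(images), 1)))
print(float(g_loss))
```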