Visual understanding requires comprehending complex visual relations between objects in a scene. Here, we seek to characterize the computational demands of abstract visual reasoning. We do this by systematically assessing the ability of modern deep convolutional neural networks (CNNs) to learn to solve the Synthetic Visual Reasoning Test (SVRT) challenge, a collection of twenty-three visual reasoning problems. Our analysis reveals a novel taxonomy of visual reasoning tasks, which can be explained by the type of relations (same-different vs. spatial-relation judgments) and the number of relations used to compose the underlying rules. Prior cognitive neuroscience work suggests that attention plays a key role in humans' visual reasoning ability. To test this hypothesis, we extended the CNNs with spatial and feature-based attention mechanisms. In a second series of experiments, we evaluated the ability of these attention networks to learn to solve the SVRT challenge and found that the resulting architectures were markedly better at solving the hardest of these visual reasoning tasks. Most importantly, the corresponding improvements on individual tasks partially explained our novel taxonomy. Overall, this work provides a granular computational account of visual reasoning and yields testable neuroscience predictions regarding the differential need for feature-based vs. spatial attention, depending on the type of visual reasoning problem.
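As a rough illustration of how such attention mechanisms can be bolted onto a CNN, the sketch below adds a channel-wise (feature-based) gate and a location-wise (spatial) softmax map to a feature tensor. All module names, sizes, and design choices are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch only: feature-based and spatial attention for a CNN feature tensor.
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Re-weights feature maps channel-wise (squeeze-and-excitation style)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.gate(x).unsqueeze(-1).unsqueeze(-1)
        return x * w                           # channel-wise gating

class SpatialAttention(nn.Module):
    """Re-weights feature maps location-wise with a single attention map."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        a = torch.softmax(self.score(x).view(b, 1, -1), dim=-1).view(b, 1, h, w)
        return x * a * (h * w)                 # rescale so the mean gain is ~1
```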
Humans continue to vastly outperform modern AI systems in their ability to parse and flexibly understand complex visual scenes. Attention and memory are two systems known to play a critical role in our ability to selectively maintain and manipulate behaviorally relevant visual information to solve some of the most challenging visual reasoning tasks. Here, we introduce a novel architecture for visual reasoning inspired by the cognitive-science literature, the Memory- and Attention-based (visual) REasOning (MAREO) architecture. MAREO instantiates an active-vision theory, which posits that the brain solves complex visual reasoning problems compositionally by learning to combine previously learned elementary visual operations into more complex visual routines. MAREO learns to solve visual reasoning tasks via sequences of attention shifts that route and maintain task-relevant visual information in a memory bank through a multi-head transformer module. Visual routines are then deployed by a dedicated reasoning module trained to judge various relations between objects in a scene. Experiments on four types of reasoning tasks demonstrate MAREO's ability to learn visual routines in a robust and sample-efficient manner.
While convolutional neural networks (CNNs) have shown remarkable results on many vision tasks, they are still strained by simple yet challenging visual reasoning problems. Inspired by the recent success of transformer networks in computer vision, in this paper we introduce the Recurrent Vision Transformer (RViT) model. Thanks to the impact of recurrent connections and spatial attention on reasoning tasks, this network achieves competitive results on the same-different visual reasoning problems from the SVRT dataset. Weight sharing across the spatial and depth dimensions regularizes the model, allowing it to learn with fewer free parameters using only 28k training samples. A comprehensive ablation study confirms the importance of a hybrid CNN + transformer architecture and the role of the feedback connections, which iteratively refine the internal representation until a stable prediction is obtained. Ultimately, this study offers a deeper understanding of the role of attention and recurrent connections in solving visual abstract reasoning tasks.
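A minimal sketch of the recurrence described above might look as follows: one transformer encoder layer with shared weights applied repeatedly to CNN feature tokens. The tiny backbone, fixed step count, and pooling here are illustrative assumptions, not the actual RViT model.

```python
# Sketch only: a weight-shared transformer block applied recurrently.
import torch
import torch.nn as nn

class RecurrentTransformerSketch(nn.Module):
    def __init__(self, dim: int = 128, steps: int = 4, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(            # tiny CNN front end
            nn.Conv2d(3, dim, 7, stride=4, padding=3), nn.ReLU())
        self.block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.steps = steps
        self.head = nn.Linear(dim, num_classes)

    def forward(self, imgs):
        x = self.backbone(imgs).flatten(2).transpose(1, 2)  # (B, N, dim) tokens
        for _ in range(self.steps):               # same weights every iteration
            x = self.block(x)                     # feedback-style refinement
        return self.head(x.mean(dim=1))           # classify pooled tokens
```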
Symmetry is omnipresent in nature and is perceived by the visual systems of many species, as it facilitates detecting ecologically important classes of objects in our environment. Symmetry perception requires abstraction of non-local spatial dependencies between image regions, and its underlying neural mechanisms remain elusive. In this paper, we evaluate deep neural network (DNN) architectures on the task of learning symmetry perception from examples. We demonstrate that feed-forward DNNs that model human performance on object recognition tasks cannot acquire a general notion of symmetry. This remains the case even when the DNNs are architected to capture non-local spatial dependencies, such as through 'dilated' convolutions and the recently introduced 'transformer' designs. By contrast, we find that recurrent architectures are capable of learning symmetry by decomposing the non-local spatial dependencies into a sequence of local operations that is reusable for novel images. These results suggest that recurrent connections likely play an important role in symmetry perception in artificial systems, and possibly in biological ones too.
A fundamental component of human vision is our ability to parse complex visual scenes and judge the relations between their constituent objects. AI benchmarks for visual reasoning have driven rapid progress in recent years, with state-of-the-art systems now reaching human accuracy on some of these benchmarks. Yet, a major gap remains in terms of the sample efficiency with which humans and AI systems learn new visual reasoning tasks. Humans' remarkable efficiency at learning has been at least partially attributed to their ability to harness compositionality, allowing them to efficiently take advantage of previously acquired knowledge when learning new tasks. Here, we introduce a novel visual reasoning benchmark, Compositional Visual Relations (CVR), to drive progress towards the development of more data-efficient learning algorithms. We take inspiration from fluid intelligence and non-verbal reasoning tests and describe a novel method for creating compositions of abstract rules and generating the associated image datasets. Our proposed benchmark includes measures of sample efficiency, generalization, and transfer across task rules, as well as the ability to leverage compositionality. We systematically evaluate modern neural architectures and find that, surprisingly, convolutional architectures surpass transformer-based architectures across all performance measures in most data regimes. However, all computational models are much less data-efficient than humans, even after learning informative visual representations using self-supervised learning. Overall, we hope our challenge will spur interest in developing neural architectures that can learn to harness compositionality for more efficient learning.
The many successes of deep neural networks (DNNs) over the past decade have largely been driven by computational scale rather than insights from biological intelligence. Here, we explore if these trends have also carried concomitant improvements in explaining the visual strategies humans rely on for object recognition. We do this by comparing two related but distinct properties of visual strategies in humans and DNNs: where they believe important visual features are in images and how they use those features to categorize objects. Across 84 different DNNs trained on ImageNet and three independent datasets measuring the where and the how of human visual strategies for object recognition on those images, we find a systematic trade-off between DNN categorization accuracy and alignment with human visual strategies for object recognition. State-of-the-art DNNs are progressively becoming less aligned with humans as their accuracy improves. We rectify this growing issue with our neural harmonizer: a general-purpose training routine that both aligns DNN and human visual strategies and improves categorization accuracy. Our work represents the first demonstration that the scaling laws that are guiding the design of DNNs today have also produced worse models of human vision. We release our code and data at https://serre-lab.github.io/Harmonization to help the field build more human-like DNNs.
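To make the idea concrete, here is a hedged sketch of a harmonization-style objective: train for accuracy while pulling the model's gradient-based saliency map toward a human importance map for the same image. The paper's actual loss is multi-scale and differs in detail; the plain MSE alignment term, the normalization, and `lambda_align` below are illustrative assumptions.

```python
# Sketch only: accuracy + saliency-alignment objective.
import torch
import torch.nn.functional as F

def harmonized_loss(model, images, labels, human_maps, lambda_align=1.0):
    images = images.requires_grad_(True)
    logits = model(images)
    task_loss = F.cross_entropy(logits, labels)
    # Saliency = gradient of the true-class score w.r.t. the input pixels.
    scores = logits.gather(1, labels[:, None]).sum()
    grads, = torch.autograd.grad(scores, images, create_graph=True)
    saliency = grads.abs().amax(dim=1)          # (B, H, W)
    # Normalize both maps to sum to one, then penalize their mismatch.
    s = saliency / (saliency.flatten(1).sum(-1)[:, None, None] + 1e-8)
    h = human_maps / (human_maps.flatten(1).sum(-1)[:, None, None] + 1e-8)
    return task_loss + lambda_align * F.mse_loss(s, h)
```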
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
We propose SAViR-T, a novel computational model for the visual reasoning problems embodied in Raven's Progressive Matrices (RPM). Our model considers the explicit spatial semantics of visual elements within each image in the puzzle, encoded as spatio-visual tokens, and learns the intra-image as well as inter-image token dependencies that are highly relevant for the visual reasoning task. Through the token relations modeled by the transformer-based SAViR-T architecture, our model extracts group-driven (row or column) representations by leveraging group-rule coherence and using it as an inductive bias to extract the underlying rule representations from the top two rows (or columns) for each token in the RPM. We use this relation representation to locate the correct choice image that completes the last row or column of the RPM. Extensive experiments on synthetic RPM benchmarks, including RAVEN, I-RAVEN, RAVEN-FAIR, and PGM, as well as the natural-image-based V-PROM, show that SAViR-T sets a new state of the art for visual reasoning, exceeding the performance of prior models.
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the disruption of previously acquired knowledge, a drawback affecting deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the modeled data do not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental, as it would allow us to build truly intelligent systems showing stability and plasticity. Secondly, it would allow us to overcome the onerous limitation of retraining these architectures from scratch with the newly updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor than the quality of the data. Secondly, we present one of the early works on incremental learning with ViT architectures, comparing functional, weight, and attention regularization approaches, and propose an effective novel asymmetric loss. We conclude with a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field. We then close with some future directions and final remarks.
Current deep learning approaches show good in-distribution generalization performance but struggle with out-of-distribution generalization. This is especially true in tasks involving abstract relations, such as recognizing rules in sequences, as found in many intelligence tests. Recent work has explored how forcing relational representations to remain distinct from sensory representations, as appears to be the case in the brain, can help artificial systems. Building on this work, we further explore and formalize the advantages afforded by 'partitioned' representations of relations and sensory details, and how this inductive bias can help recompose learned relational structure in newly encountered settings. We introduce a simple architecture based on similarity scores, which we name the Compositional Relational Network (CoRelNet). Using this model, we investigate a series of inductive biases that ensure abstract relations are learned and represented distinctly from sensory data, and explore their effects on out-of-distribution generalization for a series of relational psychophysics tasks. We find that simple architectural choices can outperform existing models in out-of-distribution generalization. Together, these results suggest that partitioning relational representations from other information streams may be a simple way to augment existing network architectures' robustness when performing out-of-distribution relational computations.
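A minimal sketch of such a similarity-score architecture, in the spirit of CoRelNet: the classifier sees only the matrix of pairwise similarities between object embeddings, never the sensory embeddings themselves. Dimensions and the MLP head below are illustrative assumptions.

```python
# Sketch only: decisions from pairwise similarities, not sensory features.
import torch
import torch.nn as nn

class CoRelNetSketch(nn.Module):
    def __init__(self, encoder: nn.Module, num_objects: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                  # maps one object -> embedding
        self.head = nn.Sequential(
            nn.Linear(num_objects * num_objects, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, objects):                 # objects: (B, N, ...) per-object inputs
        b, n = objects.shape[:2]
        z = self.encoder(objects.flatten(0, 1)).view(b, n, -1)   # (B, N, D)
        rel = torch.softmax(z @ z.transpose(1, 2), dim=-1)       # (B, N, N) similarities
        return self.head(rel.flatten(1))        # classify from relations only
```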
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Attention mechanisms have raised significant interest in the research community, since they promise to improve the performance of neural network architectures. However, for any specific problem, we still lack a principled way to choose the specific mechanisms and hyperparameters that lead to guaranteed improvements. More recently, self-attention has been proposed and widely used in transformer-like architectures, leading to significant breakthroughs in some applications. In this work, we focus on two forms of attention mechanisms: attention modules and self-attention. Attention modules are used to re-weight the features of each layer's input tensor. Different modules have different ways of performing this re-weighting in fully connected or convolutional layers. The attention models studied are fully modular and, in this work, they are used with the popular ResNet architecture. Self-attention, originally proposed in the area of natural language processing, makes it possible to relate all the items in an input sequence to one another. Self-attention is becoming increasingly popular in computer vision, where it is sometimes combined with convolutional layers, although some recent architectures do away with convolutions entirely. In this work, we study and perform an objective comparison of a number of different attention mechanisms on a specific computer-vision task, the classification of samples in the widely used Skin Cancer MNIST dataset. The results show that attention modules do sometimes improve the performance of convolutional neural network architectures, but also that this improvement, although noticeable and statistically significant, is not consistent across different settings. The results obtained with self-attention mechanisms, on the other hand, show consistent and significant improvements, leading to the best results even in architectures with a reduced number of parameters.
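For reference, the sketch below shows the second form contrasted above: a minimal single-head scaled dot-product self-attention layer, in which every item of the input sequence attends to every other item. Sizes are illustrative.

```python
# Sketch only: single-head scaled dot-product self-attention.
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.to_qkv = nn.Linear(dim, 3 * dim)   # queries, keys, values at once

    def forward(self, x):                       # x: (B, N, D) sequence of items
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(1, 2) / math.sqrt(q.size(-1)), dim=-1)
        return attn @ v                         # (B, N, D) re-mixed sequence
```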
Marine ecosystems and the fish habitats within them are becoming increasingly important, owing to their significant role in providing a valuable food source and conservation benefits. Due to their remote and hard-to-access nature, marine environments and fish habitats are often monitored using underwater cameras. These cameras generate massive amounts of digital data, which cannot be analyzed efficiently by current manual processing methods involving human observers. Deep learning (DL) is a cutting-edge AI technology that has demonstrated unprecedented performance in analyzing visual data. Despite its application to countless domains, its use in underwater fish habitat monitoring is still being explored. In this paper, we provide a tutorial covering the key concepts of DL, which helps readers gain a high-level understanding of how DL works. The tutorial also explains a step-by-step procedure for developing DL algorithms for challenging applications such as underwater fish monitoring. In addition, we provide a comprehensive survey of key deep learning techniques for fish habitat monitoring, including classification, counting, localization, and segmentation. Furthermore, we survey publicly available underwater fish datasets and compare the various DL techniques in the underwater fish monitoring domain. We also discuss some challenges and opportunities in the emerging field of deep learning for fish habitat processing. This paper is written to serve as a tutorial for marine scientists who wish to grasp a high-level understanding of DL, develop it for their applications by following our step-by-step tutorial, and see how it is advancing to facilitate their research efforts. At the same time, it is suitable for computer scientists who wish to survey state-of-the-art DL-based methods for fish habitat monitoring.
People readily generalize knowledge to novel domains and stimuli. We present a theory, instantiated in a computational model, according to which cross-domain generalization in humans is a case of analogical inference over structured (i.e., symbolic) relational representations. The model is an extension of the LISA and DORA models of relational inference and learning. The resulting model, when augmented with the capacity for reinforcement learning, learns both the content and the format (i.e., structure) of relational representations from unstructured, non-relational inputs without supervision, exploits these representations to learn individual domains, and then generalizes to new domains on first exposure via analogical inference (i.e., zero-shot learning). We demonstrate the model's ability to learn structured relational representations from a variety of simple visual stimuli, and to perform cross-domain generalization between video games (Breakout and Pong) and between several psychological tasks. We show that the model's trajectory while learning the relations closely mirrors the trajectory of children, capturing phenomena from the literature on children's reasoning and analogy making. The model's ability to generalize between domains demonstrates the flexibility afforded by representing domains in terms of their underlying relational structure, rather than simply in terms of the statistical relations between their inputs and outputs.
Chromosome analysis is essential for diagnosing genetic disorders. For hematologic malignancies, identification of somatic clonal aberrations by karyotype analysis remains the standard of care. However, karyotyping is costly and time-consuming because of the largely manual process and the expertise required in identifying and annotating aberrations. Efforts to automate karyotype analysis to date fell short in aberration detection. Using a training set of ~10k patient specimens and ~50k karyograms from over 5 years from the Fred Hutchinson Cancer Center, we created a labeled set of images representing individual chromosomes. These individual chromosomes were used to train and assess deep learning models for classifying the 24 human chromosomes and identifying chromosomal aberrations. The top-accuracy models utilized the recently introduced Topological Vision Transformers (TopViTs) with 2-level-block-Toeplitz masking, to incorporate structural inductive bias. TopViT outperformed CNN (Inception) models with >99.3% accuracy for chromosome identification, and exhibited accuracies >99% for aberration detection in most aberrations. Notably, we were able to show high-quality performance even in "few shot" learning scenarios. Incorporating the definition of clonality substantially improved both precision and recall (sensitivity). When applied to "zero shot" scenarios, the model captured aberrations without training, with perfect precision at >50% recall. Together these results show that modern deep learning models can approach expert-level performance for chromosome aberration detection. To our knowledge, this is the first study demonstrating the downstream effectiveness of TopViTs. These results open up exciting opportunities for not only expediting patient results but providing a scalable technology for early screening of low-abundance chromosomal lesions.
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
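The core move described above, splitting an image into fixed-size patches and embedding them as a token sequence, can be sketched in a few lines. The hyperparameters below follow the common ViT-Base configuration but should be read as illustrative assumptions.

```python
# Sketch only: image -> sequence of embedded patches plus a class token.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=768):
        super().__init__()
        n = (img_size // patch) ** 2
        # A strided convolution is the standard trick for "split + embed".
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))

    def forward(self, imgs):                    # imgs: (B, 3, H, W)
        x = self.proj(imgs).flatten(2).transpose(1, 2)   # (B, N, dim) patch tokens
        cls = self.cls.expand(x.size(0), -1, -1)
        return torch.cat([cls, x], dim=1) + self.pos     # ready for the encoder
```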
Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.
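The RN module itself is compact: a shared MLP g scores every ordered pair of object representations, the scores are summed, and a second MLP f decodes the result, i.e. RN(O) = f(Σ_{i,j} g(o_i, o_j)). A minimal sketch follows; the hidden sizes are illustrative.

```python
# Sketch only: a Relation Network over a set of object representations.
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim: int, hidden: int = 256, out_dim: int = 10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, objs):                    # objs: (B, N, obj_dim)
        b, n, d = objs.shape
        oi = objs.unsqueeze(2).expand(b, n, n, d)
        oj = objs.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([oi, oj], dim=-1).view(b, n * n, 2 * d)
        return self.f(self.g(pairs).sum(dim=1)) # sum over all pairs, then decode
```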
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
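The pre-training objective is a symmetric contrastive loss over a batch of matched (image, text) pairs; the sketch below closely follows the pseudocode given in the CLIP paper, with encoder internals omitted and the temperature value chosen for illustration.

```python
# Sketch only: symmetric contrastive loss over matched (image, text) pairs.
import torch
import torch.nn.functional as F

def clip_loss(image_features, text_features, temperature: float = 0.07):
    # image_features, text_features: (B, D); row i of each is a matched pair.
    img = F.normalize(image_features, dim=-1)
    txt = F.normalize(text_features, dim=-1)
    logits = img @ txt.t() / temperature        # (B, B) cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Cross-entropy in both directions: image->text and text->image.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```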
The availability of sheer volumes of Copernicus Sentinel imagery creates new opportunities for large-scale land-use land-cover (LULC) mapping using deep learning, although training on such large datasets is a non-trivial task. In this work, we experiment with the BigEarthNet dataset for LULC image classification and benchmark different state-of-the-art models, including convolutional neural networks, multi-layer perceptrons, vision transformers, EfficientNets, and Wide Residual Network (WRN) architectures. Our aim is to trade off classification accuracy, training time, and inference rate. We propose an EfficientNet-inspired framework based on compound scaling of WRNs for network depth, width, and input-data resolution, for efficiently training and testing different model setups. We design a novel scaled WRN architecture enhanced with an efficient channel attention mechanism. Our proposed lightweight model, with an order of magnitude fewer trainable parameters, achieves a 4.5% higher mean F-score classification accuracy across all 19 LULC classes and trains two times faster than the ResNet50 state-of-the-art model that we use as a baseline. We provide access to more than 50 trained models, along with our code for distributed training on multiple GPU nodes.
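For context, EfficientNet-style compound scaling, which the work above adapts to WRNs, ties depth, width, and input resolution to a single coefficient. The sketch below uses EfficientNet's published base coefficients; the paper's WRN-specific values may differ.

```python
# Sketch only: compound scaling with a single coefficient phi
# (EfficientNet base coefficients, alpha * beta^2 * gamma^2 ~= 2).
def compound_scale(phi: float, alpha=1.2, beta=1.1, gamma=1.15):
    depth_mult = alpha ** phi        # more layers
    width_mult = beta ** phi         # wider layers
    res_mult = gamma ** phi          # larger input images
    return depth_mult, width_mult, res_mult

# e.g. phi=3 roughly doubles total FLOPs three times over the base model
print(compound_scale(3.0))
```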