Geospatial Information Systems are used by researchers and Humanitarian Assistance and Disaster Response (HADR) practitioners to support a wide variety of important applications. However, collaboration between these actors is difficult due to the heterogeneous nature of geospatial data modalities (e.g., multi-spectral images of various resolutions, time series, weather data) and the diversity of tasks (e.g., regressing human activity indicators or detecting forest fires). In this work, we present a roadmap towards the construction of a general-purpose neural architecture (GPNA) with a geospatial inductive bias, pre-trained on large amounts of unlabelled earth observation data in a self-supervised manner. We envision how such a model may facilitate cooperation between members of the community. We show preliminary results on the first step of the roadmap, where we instantiate an architecture that can process a wide variety of geospatial data modalities and demonstrate that it can achieve competitive performance with domain-specific architectures on tasks relating to the U.N.'s Sustainable Development Goals.
Unsupervised pre-training methods for large vision models have been shown to improve performance on downstream supervised tasks. Developing similar techniques for satellite imagery presents significant opportunities, as unlabelled data is plentiful and its inherent temporal and multi-spectral structure provides avenues to further improve existing pre-training strategies. In this paper, we present SatMAE, a pre-training framework for temporal or multi-spectral satellite imagery based on Masked Autoencoders (MAE). To leverage temporal information, we include a temporal embedding along with independently masking image patches across time. In addition, we demonstrate that encoding multi-spectral data as groups of bands with distinct spectral positional encodings is beneficial. Our approach yields strong improvements over previous state-of-the-art techniques, both in terms of supervised learning performance on benchmark datasets (up to ↑7%) and transfer learning performance on downstream remote sensing tasks, including land cover classification (up to ↑14%) and semantic segmentation.
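A minimal sketch, under assumptions, of the two mechanisms this abstract highlights: a sinusoidal temporal embedding added to patch tokens, and a random mask drawn independently for each timestep. This is not the SatMAE implementation; the function names, tensor shapes, embedding form, and 25% keep ratio are illustrative choices.

```python
import torch

def temporal_embedding(timestamps: torch.Tensor, dim: int) -> torch.Tensor:
    """Sinusoidal embedding of acquisition times; returns shape (T, dim)."""
    half = dim // 2
    freqs = torch.exp(-torch.arange(half) * torch.log(torch.tensor(10000.0)) / half)
    angles = timestamps[:, None] * freqs[None, :]
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

def mask_independently(tokens: torch.Tensor, keep_ratio: float = 0.25):
    """tokens: (B, T, N, D). Draw a fresh random mask for every timestep."""
    B, T, N, D = tokens.shape
    n_keep = int(N * keep_ratio)
    scores = torch.rand(B, T, N)                 # independent noise per timestep
    keep = scores.argsort(dim=-1)[..., :n_keep]  # indices of visible patches
    visible = torch.gather(tokens, 2, keep[..., None].expand(-1, -1, -1, D))
    return visible, keep

# Usage: add the temporal embedding to patch tokens, then mask across time.
B, T, N, D = 2, 3, 196, 768
tokens = torch.randn(B, T, N, D)                 # patch tokens for 3 acquisitions
t_emb = temporal_embedding(torch.tensor([0.0, 30.0, 60.0]), D)  # days since first image
visible, keep = mask_independently(tokens + t_emb[None, :, None, :])
```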
Vision transformers are emerging as a powerful tool for solving computer vision problems. Recent techniques have also proven the efficacy of transformers beyond the image domain for solving numerous video-related tasks. Among those, human action recognition is receiving special attention from the research community due to its widespread applications. This article provides the first comprehensive survey of vision transformer techniques for action recognition. We analyze and summarize the existing and emerging literature in this direction while highlighting the popular trends in adapting transformers for action recognition. Due to their specialized application, we collectively refer to these methods as "action transformers". Our literature review provides suitable taxonomies for action transformers based on their architecture, modality, and intended objective. Within the context of action transformers, we explore the techniques for encoding spatio-temporal data, dimensionality reduction, frame patch and spatio-temporal cube construction, and various representation methods. We also investigate the optimization of spatio-temporal attention in transformer layers to handle longer sequences, typically by reducing the number of tokens in a single attention operation. Moreover, we investigate different network learning strategies, such as self-supervised and zero-shot learning, along with their associated losses for transformer-based action recognition. This survey also summarizes the progress in evaluation metric scores on important benchmarks for action transformers. Finally, it provides a discussion on the challenges, outlook, and future avenues for this research direction.
Biological systems perceive the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning on the other hand are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models. These priors introduce helpful inductive biases, but also lock models to individual modalities. In this paper we introduce the Perceiver, a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. The model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. We show that this architecture is competitive with or outperforms strong, specialized models on classification tasks across various modalities: images, point clouds, audio, video, and video+audio. The Perceiver obtains performance comparable to ResNet-50 and ViT on ImageNet without 2D convolutions by directly attending to 50,000 pixels. It is also competitive in all modalities in AudioSet.
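The core mechanism is easy to sketch: a small set of learned latent vectors cross-attends to an arbitrarily long input array, so attention cost grows linearly rather than quadratically in the input length. The sketch below is a hedged, minimal rendition; the actual Perceiver stacks many such blocks and uses Fourier-feature position encodings, both omitted here, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PerceiverSketch(nn.Module):
    def __init__(self, input_dim=3, latent_dim=512, n_latents=256, n_heads=8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_latents, latent_dim))
        self.proj = nn.Linear(input_dim, latent_dim)
        self.cross_attn = nn.MultiheadAttention(latent_dim, n_heads, batch_first=True)
        self.self_attn = nn.TransformerEncoderLayer(latent_dim, n_heads, batch_first=True)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: (B, M, input_dim) with M up to tens of thousands (e.g. raw pixels)
        x = self.proj(inputs)
        z = self.latents.expand(inputs.size(0), -1, -1)
        z, _ = self.cross_attn(query=z, key=x, value=x)  # distill inputs into latents
        z = self.self_attn(z)                            # cheap processing in latent space
        return z.mean(dim=1)                             # pooled representation

pixels = torch.randn(1, 10_000, 3)  # flattened pixels; the same mechanism scales to 50k
features = PerceiverSketch()(pixels)
```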
Self-supervised learning (SSL) has drawn tremendous attention in deep learning research, attracting interest from both the computer vision and remote sensing communities. While it has seen big success in computer vision, much of SSL's potential in the domain of Earth observation remains locked. In this paper, we provide an introduction to, and a review of, the concepts and latest developments in SSL for computer vision in the context of remote sensing. Further, we provide a preliminary benchmark of modern SSL algorithms on popular remote sensing datasets, verifying the potential of SSL in remote sensing and providing an extended study on data augmentation. Finally, we identify promising directions of future research in SSL for Earth observation (SSL4EO) to pave the way for fruitful interaction between both fields.
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequences as compared to recurrent networks, e.g., long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers, i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
General perception systems such as Perceivers can process arbitrary modalities in any combination and are able to handle up to a few hundred thousand inputs. They achieve this generality by using exclusively global attention operations. This however hinders them from scaling up to the input sizes required to process raw high-resolution images or video. In this paper, we show that some degree of locality can be introduced back into these models, greatly improving their efficiency while preserving their generality. To scale them further, we introduce a self-supervised approach that enables learning dense low-dimensional positional embeddings for very large signals. We call the resulting model a Hierarchical Perceiver (HiP). In sum, our contributions are: 1) scaling Perceiver-type models to raw high-resolution images and audio+video, 2) showing the feasibility of learning 1M+ positional embeddings from scratch using masked auto-encoding, 3) demonstrating competitive performance on raw data from ImageNet, AudioSet, PASCAL VOC, ModelNet40 and Kinetics datasets with the same exact, unchanged model and without specialized preprocessing or any tokenization.
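To make contribution (2) concrete, the toy sketch below treats positional embeddings as plain learnable parameters trained through masked auto-encoding: visible tokens carry their embeddings into a small encoder, and masked positions are reconstructed from their embeddings alone, so gradients shape the embeddings into a useful coordinate system. This is an assumption-laden miniature, not the HiP architecture; HiP scales the same idea to 1M+ positions.

```python
import torch
import torch.nn as nn

N, D = 4096, 64                                    # signal length, width (HiP scales N to 1M+)
pos_emb = nn.Parameter(torch.randn(N, D) * 0.02)   # learned from scratch, no hand-built structure
to_token = nn.Linear(3, D)
encoder = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
decoder = nn.Linear(D, 3)
opt = torch.optim.Adam(
    [pos_emb, *to_token.parameters(), *encoder.parameters(), *decoder.parameters()],
    lr=1e-3,
)

signal = torch.randn(1, N, 3)                      # stand-in for raw pixels/audio samples
perm = torch.randperm(N)
vis, hid = perm[: N // 4], perm[N // 4:]           # keep 25%, reconstruct the rest
context = encoder(to_token(signal[:, vis]) + pos_emb[vis])
pred = decoder(context.mean(1, keepdim=True) + pos_emb[hid][None])
loss = (pred - signal[:, hid]).pow(2).mean()       # gradients flow into pos_emb
opt.zero_grad()
loss.backward()
opt.step()
```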
Remote sensing images are useful for a wide variety of environmental and earth monitoring tasks, including tracking deforestation, illegal fishing, urban expansion, and natural disasters. The earth is extremely diverse -- the number of potential tasks in remote sensing images is massive, and the sizes of features range from several kilometers to just tens of centimeters. However, creating generalizable computer vision methods is a challenge in part due to the lack of a large-scale dataset that captures these diverse features for many tasks. In this paper, we present Satlas, a remote sensing dataset and benchmark that is large in both breadth, featuring all of the aforementioned applications and more, as well as scale, comprising 290M labels under 137 categories and seven label modalities. We evaluate eight baselines and a proposed method on Satlas, and find that there is substantial room for improvement in addressing research challenges specific to remote sensing, including processing image time series that consist of images from very different types of sensors, and taking advantage of long-range spatial context. We also find that pre-training on Satlas substantially improves performance on downstream tasks with few labeled examples, increasing average accuracy by 16% over ImageNet and 5% over the next best baseline.
Many real-world problems are inherently multimodal, from the communicative modalities humans use to express social and emotional states to the force, proprioception, and visual sensors ubiquitous on robots. While there has been an explosion of interest in multimodal representation learning, these methods are still largely focused on a small set of modalities, primarily in the language, vision, and audio space. In order to accelerate generalization towards diverse and understudied modalities, this paper studies efficient representation learning for high-modality scenarios. Since adding new models for every new modality or task becomes prohibitively expensive, a critical technical challenge is heterogeneity quantification: how can we measure which modalities encode similar information and interactions in order to permit parameter sharing with previous modalities? We propose two new information-theoretic metrics for heterogeneity quantification: (1) modality heterogeneity studies how similar 2 modalities $\{X_1,X_2\}$ are by measuring how much information can be transferred from $X_1$ to $X_2$, while (2) interaction heterogeneity studies how similarly pairs of modalities $\{X_1,X_2\}, \{X_3,X_4\}$ interact by measuring how much interaction information can be transferred from $\{X_1,X_2\}$ to $\{X_3,X_4\}$. We show the importance of these proposed metrics in high-modality scenarios as a way to automatically prioritize the fusion of modalities that contain unique information or interactions. The result is a single model, HighMMT, that scales up to $10$ modalities and $15$ tasks from $5$ different research areas. Not only does HighMMT outperform prior methods on the tradeoff between performance and efficiency, it also demonstrates a crucial scaling behavior: performance continues to improve with each modality added, and transfers to entirely new modalities and tasks during fine-tuning.
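As a rough illustration of heterogeneity quantification, the sketch below measures how much a model pre-trained on modality X1 helps when fine-tuned on X2, reading the transfer gap as a (dis)similarity score. This is a hedged proxy for the paper's information-theoretic metrics, not their estimator: the data is synthetic, the model is deliberately tiny, both modalities are assumed to share a feature dimension, and all names are hypothetical.

```python
import torch
import torch.nn as nn

def fit(model, X, y, steps=200, lr=1e-2):
    """Train a classifier and return its final training loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(X), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

def transfer_heterogeneity(X1, y1, X2, y2, width=32):
    """Proxy score: how much worse is a short fine-tune from X1 than training on X2?"""
    def make():
        return nn.Sequential(nn.Linear(X1.shape[1], width), nn.ReLU(),
                             nn.Linear(width, int(y2.max()) + 1))
    scratch = fit(make(), X2, y2)            # train on X2 from scratch
    model = make()
    fit(model, X1, y1)                       # pre-train on X1 ...
    transferred = fit(model, X2, y2, steps=50)   # ... then briefly fine-tune on X2
    return transferred - scratch             # larger gap = X1 transferred less

X1, X2 = torch.randn(256, 16), torch.randn(256, 16)
y1, y2 = torch.randint(0, 4, (256,)), torch.randint(0, 4, (256,))
print(transfer_heterogeneity(X1, y1, X2, y2))
```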
Deep learning-based algorithms have seen massive popularity in different areas of remote sensing image analysis over the past decade. Recently, transformer-based architectures, originally introduced in natural language processing, have pervaded computer vision, where the self-attention mechanism has been utilized as a replacement for the popular convolution operator to capture long-range dependencies. Inspired by recent advances in computer vision, the remote sensing community has also witnessed an exploration of vision transformers for a diverse set of tasks. Although several surveys have focused on transformers in computer vision, to the best of our knowledge we are the first to present a systematic review of recent transformer-based advances in remote sensing. Our survey covers more than 60 transformer-based methods for different remote sensing problems in the sub-areas of remote sensing: very high-resolution (VHR), hyperspectral (HSI), and synthetic aperture radar (SAR) imagery. We conclude the survey by discussing different challenges and open issues of transformers in remote sensing. Additionally, we intend to frequently update and maintain the latest transformers in remote sensing papers, together with their respective code, at: https://github.com/virobo-15/transformer-in-remote-sensing
The availability of the sheer volume of Copernicus Sentinel imagery has created new opportunities for large-scale land use and land cover (LULC) mapping using deep learning. Training on such large datasets, however, is a non-trivial task. In this work, we experiment with the BigEarthNet dataset for LULC image classification and benchmark different state-of-the-art models, including convolutional neural networks, multi-layer perceptrons, vision transformers, EfficientNets, and Wide Residual Network (WRN) architectures. Our aim is to balance classification accuracy, training time, and inference rate. We propose a framework based on EfficientNet-style compound scaling of WRNs, over network depth, width, and input data resolution, for efficiently training and testing different model setups. We design a novel scaled WRN architecture enhanced with an efficient channel attention mechanism. Our proposed lightweight model, with fewer trained parameters, achieves a 4.5% higher averaged F-score classification accuracy for all 19 LULC classes and trains two times faster than the state-of-the-art ResNet50 model that we use as a baseline. We provide access to more than 50 trained models, together with our code for distributed training on multiple GPU nodes.
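A hedged sketch of the efficient channel attention idea mentioned above: global average pooling followed by a cheap 1-D convolution across channels yields per-channel gates that rescale the feature map. The kernel size and the block's placement inside the WRN are illustrative assumptions, not this paper's exact design.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: per-channel gates from a 1-D conv over channels."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) multi-spectral feature map
        w = x.mean(dim=(2, 3))                 # (B, C) global average pool
        w = self.conv(w.unsqueeze(1))          # 1-D conv mixes neighbouring channels
        w = torch.sigmoid(w).squeeze(1)        # (B, C) channel gates in [0, 1]
        return x * w[:, :, None, None]         # rescale each channel

feats = torch.randn(4, 64, 32, 32)             # e.g. the output of a WRN block
out = ECA()(feats)
```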
Progress toward the United Nations Sustainable Development Goals (SDGs) has been hindered by a lack of data on key environmental and socioeconomic indicators, which historically have had sparse temporal and spatial coverage from ground surveys. Recent advances in machine learning have made it possible to utilize abundant, frequently-updated, and globally available data, such as from satellites or social media, to provide insights into progress toward the SDGs. Despite promising early results, approaches for using such data for SDG measurement have thus far largely been evaluated on different datasets or used inconsistent evaluation metrics, making it hard to understand whether performance is improving and where additional research would be most fruitful. Furthermore, processing satellite and ground survey data requires domain knowledge that much of the machine learning community lacks. In this paper, we introduce SustainBench, a collection of 15 benchmark tasks across 7 SDGs, including tasks related to economic development, agriculture, health, education, water and sanitation, climate action, and life on land. Datasets for 11 of the 15 tasks are released publicly for the first time. Our goals for SustainBench are to (1) lower the barriers to entry for the machine learning community to contribute to measuring and achieving the SDGs; (2) provide standard benchmarks for evaluating machine learning models on tasks across a variety of SDGs; and (3) encourage the development of novel machine learning methods where improved model performance facilitates progress toward the SDGs.
Transformer, an attention-based encoder-decoder architecture, has revolutionized the field of natural language processing. Inspired by this significant achievement, some pioneering works have recently been done on adapting transformer-like architectures to the computer vision (CV) field, and they have demonstrated their effectiveness on various CV tasks. Relying on competitive modeling capability, visual transformers have achieved impressive performance on multiple benchmarks in comparison with modern convolutional neural networks. In this paper, we comprehensively review over three hundred different visual transformers for three fundamental CV tasks (classification, detection, and segmentation), proposing a taxonomy that organizes these methods according to their motivations, structures, and usage scenarios. Because of the differences in training settings and oriented tasks, we have also evaluated these methods on different configurations for easy and intuitive comparison instead of only on various benchmarks. Furthermore, we reveal a series of essential but unexploited aspects that may enable the transformer to stand out from numerous architectures, e.g., slack high-level semantic embeddings to bridge the gap between visual and sequential transformers. Finally, three promising future research directions are suggested for further investigation.
The synergistic combination of deep learning models and Earth observation promises to support the Sustainable Development Goals (SDGs). New developments and a plethora of applications are already changing the way humanity will face the challenges of living on our planet. This paper reviews current deep learning approaches for Earth observation data, along with their applications toward monitoring and achieving the SDGs most impacted by the rapid development of deep learning in Earth observation. We systematically review case studies to 1) achieve zero hunger, 2) build sustainable cities, 3) deliver tenure security, 4) mitigate and adapt to climate change, and 5) preserve biodiversity. Important societal, economic, and environmental implications are highlighted. Exciting times are ahead, where algorithms and Earth data can help in our endeavor to address the climate crisis and support more sustainable development.
Multi-modal learning from video data has recently seen increased attention, as it allows training semantically meaningful embeddings without human annotation, enabling tasks such as zero-shot retrieval and classification. In this work, we present a multi-modal, modality-agnostic fusion transformer approach that learns to exchange information between multiple modalities, such as video, audio, and text, and integrates them into a joined multi-modal representation to obtain an embedding that aggregates multi-modal temporal information. We propose to train the system with a combinatorial loss on single modalities as well as pairs of modalities, explicitly leaving out any add-ons such as position or modality encoding. At test time, the resulting model can process and fuse any number of input modalities. Moreover, the implicit properties of the transformer allow it to process inputs of different lengths. To evaluate the proposed approach, we train the model on the large-scale HowTo100M dataset and evaluate the resulting embedding space on four challenging benchmark datasets, obtaining state-of-the-art results in zero-shot video retrieval and zero-shot video action localization.
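A hedged sketch of that combinatorial training signal: given per-modality embeddings in a shared space, a contrastive loss is applied between single modalities and between fused pairs and the remaining modality. The InfoNCE form, the mean-fusion operator, and the temperature are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F
from itertools import combinations

def info_nce(a: torch.Tensor, b: torch.Tensor, temp: float = 0.07) -> torch.Tensor:
    """Contrastive loss matching row i of a with row i of b within the batch."""
    logits = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).T / temp
    return F.cross_entropy(logits, torch.arange(a.size(0)))

def combinatorial_loss(embeds: dict) -> torch.Tensor:
    names = list(embeds)
    loss = 0.0
    for m1, m2 in combinations(names, 2):            # single modality vs single modality
        loss = loss + info_nce(embeds[m1], embeds[m2])
    for m1, m2 in combinations(names, 2):            # fused pair vs the remaining modality
        fused = (embeds[m1] + embeds[m2]) / 2
        for m3 in set(names) - {m1, m2}:
            loss = loss + info_nce(fused, embeds[m3])
    return loss

B, D = 8, 256
embeds = {"video": torch.randn(B, D), "audio": torch.randn(B, D),
          "text": torch.randn(B, D)}
print(combinatorial_loss(embeds))
```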
Transformer models have shown great success handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated when dealing with the high dimensionality introduced with the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey we analyze main contributions and trends of works leveraging Transformers to model video. Specifically, we first delve into how videos are handled at the input level. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even with lower computational complexity.
We present a framework for learning multimodal representations from unlabeled data using convolution-free Transformer architectures. Specifically, our Video-Audio-Text Transformer (VATT) takes raw signals as inputs and extracts multimodal representations that are rich enough to benefit a variety of downstream tasks. We train VATT end-to-end from scratch using multimodal contrastive losses and evaluate its performance on the downstream tasks of video action recognition, audio event classification, image classification, and text-to-video retrieval. Furthermore, we study a modality-agnostic, single-backbone Transformer by sharing weights among the three modalities. We show that the convolution-free VATT outperforms state-of-the-art ConvNet-based architectures on the downstream tasks. In particular, VATT's vision Transformer achieves top-1 accuracies of 82.1% on Kinetics-400, 83.6% on Kinetics-600, and 72.7% on Kinetics-700, along with a new record on Moments in Time, all while avoiding supervised pre-training. Transferring the same Transformer to image classification yields 78.7% top-1 accuracy on ImageNet, compared to 64.7% when training it from scratch, showing that our model generalizes despite the domain gap between videos and images. VATT's audio Transformer also sets a new record in waveform-based audio event recognition, achieving a 39.4% mAP on AudioSet without any supervised pre-training. VATT's source code is publicly available.
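A minimal sketch of the modality-agnostic variant described above: one Transformer backbone shared across all modalities, with only a thin per-modality tokenizer in front. The dimensions, depth, and linear tokenizers are illustrative assumptions, not VATT's configuration.

```python
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    """One transformer shared by all modalities; only the tokenizers differ."""
    def __init__(self, input_dims: dict, d_model: int = 256):
        super().__init__()
        self.tokenizers = nn.ModuleDict(
            {name: nn.Linear(dim, d_model) for name, dim in input_dims.items()}
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        tokens = self.tokenizers[modality](x)      # (B, N, d_model)
        return self.backbone(tokens).mean(dim=1)   # pooled embedding for a contrastive loss

model = SharedBackbone({"video": 3 * 16 * 16, "audio": 128, "text": 300})
video_emb = model(torch.randn(2, 49, 3 * 16 * 16), "video")   # 49 patch tokens
audio_emb = model(torch.randn(2, 100, 128), "audio")          # 100 waveform frames
```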
Crop type classification at large scale is at the core of remote sensing efforts, with applications of economic and ecological importance. Current state-of-the-art deep learning approaches are based on self-attention and use satellite image time series (SITS) to discriminate crop types based on their unique growth patterns. However, existing methods generalize poorly to regions not seen during training, mainly because they are not robust to temporal shifts of the growing season caused by variations in climate. To this end, we propose Thermal Positional Encoding (TPE) for attention-based crop classifiers. Unlike previous positional encodings based on calendar time (e.g., day-of-year), TPE is based on thermal time, which is obtained by accumulating daily mean temperatures over the growing season. Since crop growth is directly related to thermal time, but not to calendar time, TPE addresses the temporal shifts between different regions to improve generalization. We propose multiple TPE strategies, including learnable methods, to further improve over the common fixed positional encodings. We demonstrate our approach on a crop classification task across four different European regions, where we obtain state-of-the-art generalization results.
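The core idea translates into a few lines: accumulate daily mean temperatures into a cumulative thermal time, then feed that quantity, instead of day-of-year, into a standard sinusoidal positional encoding. The base temperature, scale, and encoding form below are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def thermal_time(daily_mean_temp: torch.Tensor, base: float = 0.0) -> torch.Tensor:
    """daily_mean_temp: (T,) degrees C per day -> cumulative thermal time, shape (T,)."""
    return torch.clamp(daily_mean_temp - base, min=0).cumsum(dim=0)

def sinusoidal(pos: torch.Tensor, dim: int = 64, scale: float = 10000.0) -> torch.Tensor:
    """Standard sinusoidal encoding, but driven by thermal rather than calendar time."""
    half = dim // 2
    freqs = scale ** (-torch.arange(half) / half)
    angles = pos[:, None] * freqs[None, :]
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

temps = torch.tensor([4.0, 7.5, 12.0, 15.5, 18.0])  # mean temperature at acquisition days
tpe = sinusoidal(thermal_time(temps))               # replaces the calendar-time encoding
```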
Remotely sensed geospatial data are critical for applications including precision agriculture, urban planning, disaster monitoring and response, and climate change research, among others. Deep learning methods are particularly promising for modeling many remote sensing tasks, given the success of deep neural networks in similar computer vision tasks and the sheer volume of remotely sensed imagery available. However, the variance in data collection methods and the handling of geospatial metadata make the application of deep learning methodology to remotely sensed data nontrivial. For example, satellite imagery often includes additional spectral bands beyond red, green, and blue, and must be joined to other geospatial data sources that can have differing coordinate systems, bounds, and resolutions. To help realize the potential of deep learning for remote sensing applications, we introduce TorchGeo, a Python library for integrating geospatial data into the PyTorch deep learning ecosystem. TorchGeo provides data loaders for a variety of benchmark datasets, composable datasets for generic geospatial data sources, samplers for geospatial data, and transforms that work with multispectral imagery. TorchGeo is also the first library to provide pre-trained models for multispectral satellite imagery (e.g., models that use all bands from the Sentinel-2 satellite), allowing for transfer learning on downstream remote sensing tasks with limited labeled data. We use TorchGeo to create reproducible benchmark results on existing datasets and benchmark our proposed method for on-the-fly preprocessing of geospatial imagery. TorchGeo is open source and available on GitHub: https://github.com/microsoft/torchgeo.
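A short usage sketch following TorchGeo's documented pattern: two geospatial datasets with potentially different coordinate systems and resolutions are intersected, and a geo-aware sampler feeds a standard PyTorch DataLoader. The paths are placeholders, and the exact dataset classes and arguments may vary across library versions, so treat this as an assumed-typical example and check the TorchGeo documentation.

```python
from torch.utils.data import DataLoader
from torchgeo.datasets import CDL, Landsat8, stack_samples
from torchgeo.samplers import RandomGeoSampler

landsat = Landsat8(root="path/to/landsat")       # multi-spectral imagery
cdl = CDL(root="path/to/cdl")                    # land-cover labels
dataset = landsat & cdl                          # auto-aligned intersection of both sources
sampler = RandomGeoSampler(dataset, size=256, length=1000)
loader = DataLoader(dataset, batch_size=8, sampler=sampler, collate_fn=stack_samples)

for batch in loader:
    image, mask = batch["image"], batch["mask"]  # reprojected to a common CRS and resolution
    break
```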
Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.