Semantic Change Detection (SCD) refers to the task of simultaneously extracting the changed areas and the semantic categories (before and after the changes) in Remote Sensing Images (RSIs). This is more informative than Binary Change Detection (BCD), since it enables detailed change analysis in the observed areas. Previous works established triple-branch Convolutional Neural Network (CNN) architectures as the paradigm for SCD. However, it remains challenging to exploit semantic information with a limited amount of change samples. In this work, we investigate how to jointly consider spatio-temporal dependencies to improve the accuracy of SCD. First, we propose a SCanFormer (Semantic Change Transformer) to explicitly model the 'from-to' semantic transitions between the bi-temporal RSIs. Then, we introduce a semantic learning scheme to leverage the spatio-temporal constraints, which are coherent with the SCD task, to guide the learning of semantic changes. The resulting network (ScanNet) significantly outperforms the baseline method in terms of both the detection of critical semantic changes and the semantic consistency of the obtained bi-temporal results. It achieves state-of-the-art accuracy on two benchmark SCD datasets.
Semantic change detection (SCD) extends the multi-class change detection (MCD) task to provide not only the change locations but also the detailed land-cover/land-use (LCLU) categories before and after the observation intervals. This fine-grained semantic change information is very useful in many applications. Recent studies indicate that SCD can be modeled through a triple-branch convolutional neural network (CNN), which contains two temporal branches and a change branch. However, in this architecture, the communications between the temporal branches and the change branch are insufficient. To overcome the limitations of existing methods, we propose a novel CNN architecture for SCD, where the semantic temporal features are merged in a deep CD unit. Furthermore, we elaborate on this architecture to reason about the bi-temporal semantic correlations. The resulting Bi-temporal Semantic Reasoning Network (Bi-SRNet) contains two types of semantic reasoning blocks to reason about single-temporal and cross-temporal semantic correlations, as well as a novel loss function to improve the semantic consistency of the change detection results. Experimental results on a benchmark dataset show that the proposed architecture obtains significant accuracy improvements over existing approaches, while the added designs in Bi-SRNet further improve the segmentation of both semantic categories and changed areas. The code of this paper is accessible at: github.com/gnsding/bi-srnet.
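As a minimal illustration of the semantic-consistency idea above (toy code, not the Bi-SRNet implementation): two bi-temporal semantic maps imply a binary change map, since a pixel is 'changed' exactly where its two class labels differ, and a consistency penalty can count disagreements with a separately predicted change map. The helper names below are hypothetical.

```python
import numpy as np

def semantic_to_change(sem_t1, sem_t2):
    """A pixel counts as changed exactly where its two class labels differ."""
    return (sem_t1 != sem_t2).astype(np.uint8)

def consistency_penalty(change_map, sem_t1, sem_t2):
    """Fraction of pixels where a predicted change map disagrees with the
    change map implied by the two semantic maps."""
    implied = semantic_to_change(sem_t1, sem_t2)
    return float(np.mean(change_map != implied))

sem_t1 = np.array([[0, 1], [2, 2]])   # class ids at time 1
sem_t2 = np.array([[0, 3], [2, 1]])   # class ids at time 2
implied = semantic_to_change(sem_t1, sem_t2)  # changed at (0,1) and (1,1)
```

A consistency loss of this kind penalizes predictions where the change branch and the two semantic branches contradict each other.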
Building extraction in VHR RSIs remains a challenging task due to occlusion and boundary ambiguity problems. Although conventional convolutional neural network (CNN)-based methods are capable of exploiting local texture and context information, they fail to capture the shape patterns of buildings, which is a necessary constraint in human recognition. To address this issue, we propose an adversarial shape learning network (ASLNet) to model the building shape patterns and improve the accuracy of building segmentation. In the proposed ASLNet, we introduce an adversarial learning strategy to explicitly model the shape constraints, as well as a CNN shape regularizer to strengthen the embedding of shape features. To assess the geometric accuracy of building segmentation results, we introduce several object-based quality assessment metrics. Experiments on two open benchmark datasets show that the proposed ASLNet improves both the pixel-based accuracy and the object-based quality measurements by a large margin. The code is available at: https://github.com/gnsding/aslnet
Deep learning-based algorithms have become widely popular in different areas of remote sensing image analysis over the past decade. Recently, transformer-based architectures, originally introduced in natural language processing, have pervaded the computer vision field, where the self-attention mechanism has been utilized as a replacement for the popular convolution operator to capture long-range dependencies. Inspired by recent advances in computer vision, the remote sensing community has also witnessed an exploration of vision transformers for a diverse set of tasks. Although a number of surveys focus on transformers in computer vision in general, to the best of our knowledge we are the first to present a systematic review of recent advances in transformer-based methods in remote sensing. Our survey covers more than 60 transformer-based methods for different remote sensing problems in the sub-areas of remote sensing: very high-resolution (VHR), hyperspectral (HSI) and synthetic aperture radar (SAR) imagery. We conclude the survey by discussing the different challenges and open issues of transformers in remote sensing. Additionally, we intend to frequently update and maintain the latest transformer papers in remote sensing, together with their respective code, at: https://github.com/virobo-15/transformer-in-in-remote-sensing
Building detection and change detection using remote sensing images can help urban and rescue planning. Moreover, they can be used for building damage assessment after natural disasters. Currently, most existing models for building detection use only one image (the pre-disaster image) to detect buildings. This is based on the idea that post-disaster images reduce the model's performance because of the presence of destroyed buildings. In this paper, we propose a Siamese model, called SiamixFormer, which uses pre- and post-disaster images as input. Our model has two encoders and a hierarchical transformer architecture. The output of each stage in the two encoders is fed to a temporal transformer for feature fusion, in such a way that queries are generated from the pre-disaster image and (keys, values) are generated from the post-disaster image. In this way, temporal features are also considered in the feature fusion. Another advantage of using temporal transformers in feature fusion is that, compared with CNNs, they can better maintain the large receptive field produced by the transformer encoders. Finally, at each stage, the output of the temporal transformer is fed to a simple MLP decoder. The model is evaluated on the xBD and WHU datasets for building detection, and on the LEVIR-CD and CDD datasets for change detection, and can outperform the state of the art.
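The temporal fusion pattern described above — queries from the pre-disaster stream, keys and values from the post-disaster stream — can be sketched as single-head dot-product cross-attention over flattened tokens (a toy sketch with assumed shapes, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_cross_attention(q_feats, kv_feats):
    """Single-head dot-product cross-attention: queries from one temporal
    stream, keys and values from the other (shapes: [tokens, dim])."""
    d = q_feats.shape[-1]
    attn = softmax(q_feats @ kv_feats.T / np.sqrt(d), axis=-1)
    return attn @ kv_feats

rng = np.random.default_rng(0)
pre = rng.normal(size=(16, 32))    # 16 tokens from one encoder stage at t1
post = rng.normal(size=(16, 32))   # matching tokens from the t2 encoder
fused = temporal_cross_attention(pre, post)
```

Each fused token is a convex combination of post-disaster tokens, weighted by their similarity to the corresponding pre-disaster query.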
Deep learning based change detection methods have received wide attention, thanks to their strong capability in obtaining rich features from images. However, existing AI-based CD methods largely rely on three functionality-enhancing modules, i.e., semantic enhancement, attention mechanisms, and correspondence enhancement. The stacking of these modules leads to great model complexity. To unify these three modules into a simple pipeline, we introduce the Relational Change Detection Transformer (RCDT), a novel and simple framework for remote sensing change detection tasks. The proposed RCDT consists of three major components: a weight-sharing Siamese Backbone to obtain bi-temporal features, a Relational Cross Attention Module (RCAM) that implements offset cross attention to obtain bi-temporal relation-aware features, and a Features Constrain Module (FCM) to achieve the final refined predictions with high-resolution constraints. Extensive experiments on four different publicly available datasets suggest that our proposed RCDT exhibits superior change detection performance compared with other competing methods. The theoretical, methodological, and experimental knowledge of this study is expected to benefit future change detection efforts that involve the cross attention mechanism.
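The weight-sharing Siamese backbone used here (and in most CD networks) can be caricatured in a few lines: the same weights embed both dates, so feature differences reflect image changes rather than branch mismatch. The single shared linear layer below is a toy stand-in for a deep backbone, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
W_shared = rng.normal(size=(3, 8))  # ONE weight matrix used for both dates

def backbone(pixels):
    """Toy stand-in for a deep Siamese branch: shared projection + ReLU."""
    return np.maximum(pixels @ W_shared, 0.0)

t1 = rng.normal(size=(64, 3))      # 64 pixels x 3 bands, first date
t2 = t1.copy()
t2[:8] += 5.0                      # simulate a change in the first 8 pixels
f1, f2 = backbone(t1), backbone(t2)  # identical weights -> comparable features
change_score = np.abs(f1 - f2).sum(axis=-1)  # nonzero only where inputs differ
```

Because the two branches share weights exactly, unchanged pixels map to identical features and their difference is zero.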
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages to encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, the Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is also introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from the videos, and illustrate that MSTAT can achieve state-of-the-art accuracies on various standard benchmarks.
Change detection (CD) aims to detect change regions within an image pair captured at different times, playing a significant role in diverse real-world applications. Nevertheless, most of the existing works focus on designing advanced network architectures to map the feature difference to the final change map while ignoring the influence of the quality of the feature difference. In this paper, we study CD from a different perspective, i.e., how to optimize the feature difference to highlight changes and suppress unchanged regions, and propose a novel module denoted as iterative difference-enhanced transformers (IDET). IDET contains three transformers: two transformers for extracting the long-range information of the two images and one transformer for enhancing the feature difference. In contrast to the previous transformers, the third transformer takes the outputs of the first two transformers to guide the enhancement of the feature difference iteratively. To achieve more effective refinement, we further propose multi-scale IDET-based change detection, which uses multi-scale representations of the images for multiple feature difference refinements, together with a coarse-to-fine fusion strategy to combine all refinements. Our final CD method outperforms seven state-of-the-art methods on six large-scale datasets under diverse application scenarios, which demonstrates the importance of feature difference enhancement and the effectiveness of IDET.
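The iterative enhancement idea — repeatedly sharpening the feature difference so that strong (changed) responses are kept while weak (unchanged) ones are suppressed — can be caricatured with a sigmoid-gated toy (this is only an illustration of the iterative pattern, not IDET's transformer):

```python
import numpy as np

def enhance_difference(f1, f2, iterations=3):
    """Iteratively gate the feature difference by its own magnitude:
    weak responses shrink toward 0, strong responses are preserved."""
    diff = np.abs(f1 - f2)
    for _ in range(iterations):
        gate = 1.0 / (1.0 + np.exp(-(diff - diff.mean())))  # soft threshold
        diff = diff * gate
    return diff

f1 = np.array([0.0, 0.1, 0.2, 5.0])
f2 = np.array([0.0, 0.0, 0.1, 0.0])   # only the last position truly changed
out = enhance_difference(f1, f2)
```

After a few iterations, the small spurious differences are driven near zero while the large, genuine one survives.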
Accurate segmentation of organs or lesions from medical images is crucial for reliable diagnosis of diseases and organ morphometry. In recent years, convolutional encoder-decoder solutions have achieved substantial progress in the field of automatic medical image segmentation. Due to the inherent bias in the convolution operations, prior models mainly focus on the local visual cues formed by neighboring pixels, but fail to fully model long-range contextual dependencies. In this paper, we propose a novel transformer-based attention-guided network called TransAttUnet, in which multi-level guided attention and multi-scale skip connections are designed to jointly enhance the performance of the semantic segmentation architecture. Inspired by transformers, self-aware attention (SAA) with transformer self-attention (TSA) and global spatial attention (GSA) is incorporated into TransAttUnet to effectively learn the non-local interactions between encoder features. Moreover, we also use additional multi-scale skip connections between decoder blocks to aggregate the upsampled features with different semantic scales. In this way, the representation ability of multi-scale contextual information is strengthened to generate discriminative features. Benefiting from these complementary components, the proposed TransAttUnet can effectively alleviate the loss of fine details caused by the stacking of convolution layers and consecutive sampling operations, finally improving the segmentation quality of medical images. Extensive experiments on multiple medical image segmentation datasets from different imaging modalities demonstrate that the proposed method consistently outperforms state-of-the-art baselines. Our code and pre-trained models are available at: https://github.com/yishuliu/transattunet.
The aim of change detection (CD) is to detect changes by comparing two images taken at different times. The challenging part of CD is to keep track of the changes the user wants to highlight, such as new buildings, while ignoring changes due to external factors such as the environment, illumination conditions, fog, or seasonal changes. Recent developments in the field of deep learning have enabled researchers to achieve outstanding performance in this area. In particular, different mechanisms of space-time attention allow the spatial features extracted from the models to be exploited and correlated temporally by leveraging both available images. The downside is that these models have become increasingly complex and large, often unfeasible for edge applications. These are limitations when the models must be applied in the industrial field or in applications requiring real-time performance. In this work, we propose a novel model, called TinyCD, that proves to be both lightweight and effective, able to achieve the state of the art with 13-150x fewer parameters. In our approach, we exploit the importance of low-level features for comparing the images. For this reason, we use only a few backbone blocks. This strategy allows us to keep the number of network parameters low. To compose the features extracted from the two images, we introduce a novel, parameter-economical, mixing block capable of cross-correlating features in both the space and time domains. Finally, to fully exploit the information contained in the computed features, we define a PW-MLP block able to perform pixel-wise classification. Source code, models and results are available here: https://github.com/andreacodegoni/tiny_model_4_cd
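The PW-MLP idea — the same tiny MLP applied independently to every pixel of the mixed feature map — reduces to a couple of matrix products over flattened pixels (hypothetical shapes and random weights, only an illustration of the pattern, not the TinyCD code):

```python
import numpy as np

rng = np.random.default_rng(1)
C, H = 16, 8                                 # mixed-feature channels, hidden width
W1, b1 = rng.normal(size=(C, H)), np.zeros(H)
W2, b2 = rng.normal(size=(H, 1)), np.zeros(1)

def pw_mlp(mixed):
    """mixed: [num_pixels, C] features from the space-time mixing block.
    The SAME weights are applied at every pixel; returns a change
    probability per pixel."""
    h = np.maximum(mixed @ W1 + b1, 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid -> [num_pixels, 1]

probs = pw_mlp(rng.normal(size=(256 * 256, C)))  # a 256x256 change map, flattened
```

Sharing one small MLP across all pixels is what keeps the classification head's parameter count tiny regardless of image size.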
This paper presents DAHiTra, a novel deep learning model with hierarchical transformers to classify building damage based on satellite images in the aftermath of hurricanes. Automated building damage assessment provides critical information for decision making and resource allocation for rapid emergency response. Satellite imagery provides real-time, high-coverage information and offers the opportunity to inform large-scale post-disaster building damage assessment. Moreover, deep learning methods have been shown to be promising for classifying building damage. In this work, a novel transformer-based network is proposed for assessing building damage. The network leverages hierarchical spatial features at multiple resolutions and captures the temporal differences in the feature domain after applying a transformer encoder to the spatial features. The network achieves state-of-the-art performance when tested on the large-scale disaster damage dataset (xBD) for building localization and damage classification, as well as on the LEVIR-CD dataset for the change detection task. In addition, we introduce a new high-resolution satellite imagery dataset, Ida-BD (related to Hurricane Ida, which struck Louisiana in 2021), for domain adaptation, to further evaluate the capability of the model to be applied to newly damaged areas. The domain adaptation results indicate that the proposed model can be adapted to a new event with only limited fine-tuning. Hence, the proposed model advances the current state of the art through better performance and domain adaptation. Furthermore, Ida-BD also provides a high-resolution annotated dataset for future research in this field.
Convolutional neural networks (CNNs) have become the consensus for medical image segmentation tasks. However, due to the nature of convolution operations, they are limited in modeling long-range dependencies and spatial correlations. Although transformers were initially developed to address this problem, they fail to capture low-level features. In contrast, both local and global features are demonstrated to be crucial for dense prediction, such as segmentation in challenging contexts. In this paper, we propose HiFormer, a novel method that efficiently bridges a CNN and a transformer for medical image segmentation. Specifically, we design two multi-scale feature representations using the seminal Swin Transformer module and a CNN-based encoder. To secure a fine fusion of the global and local features obtained from the two aforementioned representations, we propose a Double-Level Fusion (DLF) module in the skip connections of the encoder-decoder structure. Extensive experiments on various medical image segmentation datasets demonstrate the effectiveness of HiFormer over other CNN-based, transformer-based, and hybrid methods in terms of computational complexity as well as quantitative and qualitative results. Our code is publicly available at: https://github.com/amirhossein-kz/hiformer
Large vision foundation models have made significant progress in visual tasks on natural images, where vision transformers are the primary choice for their good scalability and representation ability. However, the utilization of large models in the remote sensing (RS) community remains under-explored, where existing models are still at a small scale, which limits performance. In this paper, we resort to plain vision transformers with about 100 million parameters and make the first attempt to propose large vision models customized for RS tasks, exploring how such large models perform. Specifically, to handle the large image size and objects of various orientations in RS images, we propose a new rotated varied-size window attention to substitute the original full attention in transformers, which can significantly reduce the computational cost and memory footprint while learning better object representations by extracting rich context from the diverse generated windows. Experiments on detection tasks demonstrate the superiority of our model over all state-of-the-art models, achieving 81.16% mAP on the DOTA-V1.0 dataset. The results of our models on downstream classification and segmentation tasks also demonstrate competitive performance compared with existing advanced methods. Further experiments show the advantages of our models in terms of computational complexity and few-shot learning. The code and models will be released at https://github.com/vitae-transformer/remote-sensing-rvsa
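Window-based attention of the kind described above keeps cost manageable by attending only inside local windows; the partitioning step itself is just a reshape. The sketch below shows the plain, un-rotated, fixed-size case (the rotated varied-size mechanism is more involved; this only illustrates why windowing cuts the cost):

```python
import numpy as np

def window_partition(feat, win):
    """Split an [H, W, C] map into non-overlapping windows of win x win tokens.
    Attention inside each window costs O((win*win)**2) per window instead of
    O((H*W)**2) for full attention over the whole map."""
    H, Wd, C = feat.shape
    x = feat.reshape(H // win, win, Wd // win, win, C)   # assumes win divides H, W
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win * win, C)

feat = np.arange(8 * 8 * 2, dtype=float).reshape(8, 8, 2)
windows = window_partition(feat, win=4)   # 4 windows of 16 tokens each
```

For an H x W map with window size w, full attention touches (HW)^2 token pairs while windowed attention touches only (HW) * w^2, a large saving for the big image sizes typical of RS data.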
Supervised deep learning models depend on massive amounts of labeled data. Unfortunately, it is time-consuming and labor-intensive to collect and annotate bi-temporal samples containing the desired changes. Transfer learning from pre-trained models is effective in alleviating label insufficiency in remote sensing (RS) change detection (CD). We explore the use of semantic information during pre-training. Different from traditional supervised pre-training, which learns the mapping from image to label, we incorporate semantic supervision into a self-supervised learning (SSL) framework. Typically, multiple objects of interest (e.g., buildings) are distributed in various locations in an uncurated RS image. Instead of manipulating image-level representations via global pooling, we introduce point-level supervision on per-pixel embeddings to learn spatially sensitive features, thereby benefiting downstream dense CD. To achieve this, we obtain multiple points via class-balanced sampling on the overlapped area between views, using the semantic mask. We learn an embedding space where background and foreground points are pushed apart, and spatially aligned points across views are pulled together. Our intuition is that the resulting semantically discriminative representations, which are invariant to irrelevant changes (illumination and unconcerned land covers), may help change recognition. We collect large-scale image-mask pairs freely available in the RS community for pre-training. Extensive experiments on three CD datasets verify the effectiveness of our method. Ours significantly outperforms ImageNet pre-training, in-domain supervision, and several SSL methods. Empirical results indicate that our pre-training improves the generalization and data efficiency of CD models. Notably, we achieve competitive results using 20% of the training data compared with the baseline (random initialization) using 100% of the data. Our code is available.
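The point-level objective above is essentially a contrastive loss over per-pixel embeddings: the embeddings of the same spatial point in the two views attract, while all other points repel. A minimal InfoNCE-style sketch (a hypothetical stand-in, not the authors' exact loss):

```python
import numpy as np

def point_infonce(z1, z2, temp=0.1):
    """z1[i] and z2[i] embed the SAME spatial point under two augmented views.
    Returns the mean cross-entropy of matching each z1[i] to its z2[i]
    against all other points in the batch."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temp                       # [n, n] cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return float(-log_prob[idx, idx].mean())

rng = np.random.default_rng(3)
z = rng.normal(size=(32, 16))
aligned_loss = point_infonce(z, z + 0.01 * rng.normal(size=(32, 16)))
shuffled_loss = point_infonce(z, np.roll(z, 1, axis=0))  # misaligned points
```

The loss is low when spatially aligned points agree across views and high when the alignment is broken, which is exactly the training signal the pre-training exploits.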
Camouflaged object detection (COD) aims to detect/segment camouflaged objects embedded in the environment, which has attracted increasing attention over the past decades. Although several COD methods have been developed, they still suffer from unsatisfactory performance due to the intrinsic similarities between the foreground objects and background surroundings. In this paper, we propose a novel Feature Aggregation and Propagation Network (FAP-Net) for camouflaged object detection. Specifically, we propose a Boundary Guidance Module (BGM) to explicitly model the boundary characteristic, which can provide boundary-enhanced features to boost the COD performance. To capture the scale variations of the camouflaged objects, we propose a Multi-scale Feature Aggregation Module (MFAM) to characterize the multi-scale information from each layer and obtain the aggregated feature representations. Furthermore, we propose a Cross-level Fusion and Propagation Module (CFPM). In the CFPM, the feature fusion part can effectively integrate the features from adjacent layers to exploit the cross-level correlations, and the feature propagation part can transmit valuable context information from the encoder to the decoder network via a gate unit. Finally, we formulate a unified and end-to-end trainable framework where cross-level features can be effectively fused and propagated for capturing rich context information. Extensive experiments on three benchmark camouflaged datasets demonstrate that our FAP-Net outperforms other state-of-the-art COD models. Moreover, our model can be extended to the polyp segmentation task, and the comparison results further validate the effectiveness of the proposed model in segmenting polyps. The source code and results will be released at https://github.com/taozh2017/FAPNet.
Vision transformers (ViTs), which encode an image as a sequence of patches, bring new paradigms for semantic segmentation. We present an efficient framework of representation separation at the local-patch level and the global-region level for semantic segmentation with ViTs. It targets the peculiar over-smoothness of ViTs in semantic segmentation, and therefore differs from current popular paradigms of context modeling and most existing related methods that reinforce the advantage of attention. We first deliver a decoupled two-pathway network in which a second pathway enhances and passes down local-patch discrepancy complementary to the global representations of transformers. We then propose a spatially adaptive separation module to obtain more separate deep representations, and a discriminative cross-attention which yields more discriminative region representations through novel auxiliary supervisions. The proposed methods achieve some impressive results: 1) incorporated with large-scale plain ViTs, our methods achieve new state-of-the-art performance on five widely used benchmarks; 2) using masked pre-trained plain ViTs, we achieve 68.9% mIoU on Pascal Context, setting a new record; 3) pyramid ViTs integrated with the decoupled two-pathway network even surpass the well-designed high-resolution ViTs on Cityscapes; 4) the representations improved by our framework have favorable transferability on images with natural corruptions. The codes will be released publicly.
The availability of sheer volumes of Copernicus Sentinel imagery creates new opportunities for large-scale land use land cover (LULC) mapping using deep learning. Training on such large datasets, though, is a non-trivial task. In this work, we experiment with the BigEarthNet dataset for LULC image classification and benchmark different state-of-the-art models, including convolutional neural networks, multi-layer perceptrons, vision transformers, EfficientNets and wide residual network (WRN) architectures. Our aim is to leverage classification accuracy, training time and inference rate. We propose a framework based on EfficientNet-style compound scaling of WRNs, over network depth, width and input data resolution, to efficiently train and test different model setups. We design a novel scaled WRN architecture enhanced with an efficient channel attention mechanism. Our proposed lightweight model, with fewer trained parameters, achieves an average F-score classification accuracy 4.5% higher across all 19 LULC classes and is trained two times faster than the ResNet50 state-of-the-art model that we use as a baseline. We provide access to more than 50 trained models, together with our code for distributed training on multiple GPU nodes.
Semantic segmentation is the computer vision task of assigning each pixel of an image to a class. The task of semantic segmentation should be performed with both accuracy and efficiency. Most existing deep FCNs require heavy computation, and these power-hungry networks are unsuitable for real-time applications on portable devices. This project analyzes current semantic segmentation models to explore the feasibility of applying them for emergency response during catastrophic events. We compare the performance of real-time semantic segmentation models with their non-real-time counterparts, constrained by aerial images under oppositional settings. Furthermore, we train several models on the FloodNet dataset, containing UAV images captured after Hurricane Harvey, and benchmark their execution on special classes such as flooded buildings vs. non-flooded buildings or flooded roads vs. non-flooded roads. In this project, we developed a real-time UNet-based model and deployed that network on a Jetson AGX Xavier module.
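Benchmarking execution as described above boils down to timing repeated forward passes; a small stdlib-only harness makes the pattern concrete (`model_fn` is a hypothetical placeholder for any callable, here a trivial stand-in):

```python
import time

def benchmark_fps(model_fn, batch, warmup=3, runs=10):
    """Return frames per second for model_fn over `runs` timed calls,
    after a few warm-up calls (to amortize caches and lazy initialization)."""
    for _ in range(warmup):
        model_fn(batch)
    start = time.perf_counter()
    for _ in range(runs):
        model_fn(batch)
    elapsed = time.perf_counter() - start
    return runs / elapsed

# Toy stand-in for a real-time segmentation network's forward pass.
fps = benchmark_fps(lambda x: [v * 2 for v in x], list(range(1000)))
```

On embedded targets such as the Jetson module mentioned above, the same harness wraps the deployed network's inference call to report sustained throughput.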
Multi-modal data are becoming readily available in remote sensing (RS) and can provide complementary information about the Earth's surface. Effective fusion of multi-modal information is therefore important for various applications in RS, but also very challenging due to domain differences, noise, and redundancies. There is a lack of effective and scalable fusion techniques for bridging multiple modality encoders and fully exploiting complementary information. To this end, we propose a new multi-modality network (MultiModNet) for multi-modal remote sensing data, based on a novel pyramid attention fusion (PAF) module and a gated fusion unit (GFU). The PAF module is designed to efficiently obtain rich fine-grained contextual representations from each modality with built-in cross-level and cross-view attention fusion mechanisms, and the GFU module utilizes a novel gating mechanism for early merging of features, thereby diminishing hidden redundancies and noise. This enables supplementary modalities to effectively extract the most valuable and complementary information for late feature fusion. Extensive experiments on two representative RS benchmark datasets demonstrate the effectiveness, robustness, and superiority of MultiModNet for multi-modal land cover classification.
Change detection (CD) aims to find the difference between two images taken at different times and outputs a change map representing whether each region has changed or not. To achieve a better result in generating the change map, many State-of-The-Art (SoTA) methods design a deep learning model with a powerful discriminative ability. However, these methods still achieve lower performance because they ignore spatial information and scaling changes between objects, giving rise to blurry or wrong boundaries. In addition, they also neglect the interactive information between the two different images. To alleviate these problems, we propose the Scale and Relation-Aware Siamese Network (SARAS-Net) to deal with these issues. In this paper, three modules are proposed, namely relation-aware, scale-aware, and cross-transformer, to tackle the problem of scene change detection more effectively. To verify our model, we tested it on three public datasets, including LEVIR-CD, WHU-CD, and DSIFN, and obtained SoTA accuracy. Our code is available at https://github.com/f64051041/SARAS-Net.