Detection transformers have recently shown promising object detection results and attracted increasing attention. However, how to develop effective domain adaptation techniques to improve their cross-domain performance remains unexplored. In this paper, we delve into this topic and empirically find that direct feature distribution alignment on the CNN backbone only brings limited improvements, as it does not guarantee domain-invariant sequence features in the transformer for prediction. To address this issue, we propose a novel Sequence Feature Alignment (SFA) method that is specially designed for the adaptation of detection transformers. Technically, SFA consists of a domain query-based feature alignment (DQFA) module and a token-wise feature alignment (TDA) module. In DQFA, a novel domain query is used to aggregate and align the global context from the token sequences of both domains. DQFA reduces the domain discrepancies in global feature representations and object relations when deployed in the transformer encoder and decoder, respectively. Meanwhile, TDA aligns token features within the sequences of both domains, which reduces the domain gaps in local and instance-level feature representations in the transformer encoder and decoder, respectively. Besides, a novel bipartite matching loss is proposed to enhance feature discriminability for robust object detection. Experiments on three challenging benchmarks show that SFA outperforms state-of-the-art domain adaptive object detection methods. Code is available at: https://github.com/encounter1997/sfa.
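A minimal sketch of the domain-query pattern DQFA describes: a learnable domain query is prepended to the token sequence so that attention aggregates global context into it, and a small classifier trained through a gradient-reversal layer pushes that context to be domain-invariant. This is an illustration under our own assumptions (classifier width, reversal strength `lambd`), not the released SFA code:

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainQueryAlignment(nn.Module):
    """Hypothetical sketch of domain-query-based feature alignment (DQFA)."""
    def __init__(self, d_model=256, lambd=1.0):
        super().__init__()
        self.domain_query = nn.Parameter(torch.zeros(1, 1, d_model))
        self.classifier = nn.Linear(d_model, 2)  # source vs. target
        self.lambd = lambd

    def prepend(self, tokens):                 # tokens: (B, N, d_model)
        b = tokens.size(0)
        dq = self.domain_query.expand(b, -1, -1)
        return torch.cat([dq, tokens], dim=1)  # (B, N+1, d_model)

    def domain_logits(self, encoded):          # encoded: (B, N+1, d_model)
        dq_out = encoded[:, 0]                 # the aggregated domain token
        return self.classifier(GradientReversal.apply(dq_out, self.lambd))

# Usage: prepend before the transformer encoder, classify afterwards.
align = DomainQueryAlignment()
tokens = torch.randn(2, 100, 256)
seq = align.prepend(tokens)
# ... run `seq` through the encoder (omitted in this sketch), then:
logits = align.domain_logits(seq)
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1]))  # 0=source, 1=target
loss.backward()
```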
Recently, DEtection TRansformer (DETR), an end-to-end object detection pipeline, has achieved promising performance. However, it requires large-scale labeled data and suffers from domain shift, especially when no labeled data is available in the target domain. To solve this problem, we propose MTTrans, an end-to-end cross-domain detection transformer based on the mean teacher framework, which can fully exploit unlabeled target domain data for object detection training and transfer knowledge between domains via pseudo labels. We further propose a comprehensive multi-level feature alignment to improve the pseudo labels generated by the mean teacher framework, leveraging the cross-scale self-attention mechanism in Deformable DETR. Image and object features are aligned at the local, global, and instance levels with domain query-based feature alignment (DQFA), bi-level graph-based prototype alignment (BGPA), and token-wise image feature alignment (TIFA). In turn, the unlabeled target domain data, pseudo-labeled and made available for object detection training by the mean teacher framework, leads to better feature extraction and alignment. Thus, the mean teacher framework and the comprehensive multi-level feature alignment can be optimized iteratively and mutually based on the transformer architecture. Extensive experiments show that our proposed method achieves state-of-the-art performance in three domain adaptation scenarios; notably, the result on the Sim10k-to-Cityscapes scenario improves from 52.6 mAP to 57.9 mAP. Code will be released.
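The mean-teacher machinery underlying MTTrans is standard and compact: the teacher is an exponential moving average (EMA) of the student, and its confident predictions on unlabeled target images become pseudo labels. A minimal, framework-agnostic sketch follows; the momentum, the confidence threshold, and the torchvision-style output format are assumptions, not the paper's settings:

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """EMA update of the teacher from the student (mean teacher)."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

@torch.no_grad()
def make_pseudo_labels(teacher, target_images, conf_thresh=0.7):
    """Keep only detections the teacher is confident about.

    Assumes the teacher returns a list of dicts with 'boxes', 'labels',
    'scores' (the torchvision detection convention); adapt as needed.
    """
    teacher.eval()
    outputs = teacher(target_images)
    pseudo = []
    for out in outputs:
        keep = out["scores"] > conf_thresh
        pseudo.append({"boxes": out["boxes"][keep], "labels": out["labels"][keep]})
    return pseudo
```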
DETR-style detectors stand out in in-domain scenarios, but their properties in domain-shift settings remain barely explored. This paper aims to build a simple but effective baseline with a DETR-style detector in domain-shift settings based on two findings. First, mitigating the domain shift on the backbone and the decoder output features excels in getting favorable results. Second, advanced domain alignment methods on both of these parts further enhance the performance. Thus, we propose the Object-Aware Alignment (OAA) module and the Optimal Transport based Alignment (OTA) module to achieve comprehensive domain alignment on the outputs of the backbone and the detector. The OAA module aligns the foreground regions identified by pseudo labels in the backbone outputs, leading to domain-invariant features. The OTA module utilizes the sliced Wasserstein distance to maximize the retention of location information while minimizing the domain gap in the decoder outputs. We implement our findings and the alignment modules into our adaptation method and benchmark DETR-style detectors in domain-shift settings. Experiments on various domain adaptation scenarios validate the effectiveness of our method.
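The sliced Wasserstein distance used by the OTA module has a simple Monte-Carlo form: project both feature sets onto random 1-D directions, sort the projections, and average the distance between sorted values (sorting is the optimal 1-D transport plan). A minimal sketch, with the number of projections and the squared ground cost as assumptions:

```python
import torch

def sliced_wasserstein(source_feats, target_feats, n_proj=64):
    """Monte-Carlo estimate of the (squared) sliced Wasserstein distance.

    source_feats, target_feats: (N, D) tensors with the same N and D
    (subsample or pad beforehand if the counts differ).
    """
    d = source_feats.size(1)
    # Random unit directions on the sphere, one column per projection.
    theta = torch.randn(d, n_proj, device=source_feats.device)
    theta = theta / theta.norm(dim=0, keepdim=True)
    # Project, then sort each 1-D projection: sorted matching is the
    # optimal transport plan in one dimension.
    src_proj, _ = (source_feats @ theta).sort(dim=0)
    tgt_proj, _ = (target_feats @ theta).sort(dim=0)
    return (src_proj - tgt_proj).pow(2).mean()

# e.g., align decoder output embeddings of the two domains:
loss = sliced_wasserstein(torch.randn(100, 256), torch.randn(100, 256))
```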
Detection transformers have achieved competitive performance on the sample-rich COCO dataset. However, we show that most of them suffer from significant performance drops on small-size datasets, such as Cityscapes. In other words, detection transformers are generally data-hungry. To tackle this problem, we empirically analyze the factors that affect data efficiency, through a step-by-step transition from a data-efficient RCNN variant to the representative DETR. The empirical results suggest that sparse feature sampling from local image areas holds the key. Based on this observation, we alleviate the data-hungry issue of existing detection transformers by simply altering how key and value sequences are constructed in the cross-attention layer, with minimal modifications to the original models. Besides, we introduce a simple yet effective label augmentation method to provide richer supervision and improve data efficiency. Experiments show that our method can be readily applied to different detection transformers and improve their performance on both sample-rich and sample-poor datasets. Code will be made publicly available at \url{https://github.com/encounter1997/de-detrs}.
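The key finding — building the cross-attention key/value sequence from sparse, local image regions — can be approximated with RoIAlign: sample a small grid of features inside each query's current box estimate and attend only over those. This is a simplified reading of the idea, not the released DE-DETRs implementation; the 7x7 grid and the single feature level are assumptions:

```python
import torch
from torchvision.ops import roi_align

def local_key_value(feature_map, boxes_per_image, spatial_scale, grid=7):
    """Sample sparse local features to serve as cross-attention keys/values.

    feature_map: (B, C, H, W); boxes_per_image: list of (Q, 4) xyxy boxes
    in input-image coordinates (e.g., current box estimates per query).
    Returns (sum_Q, grid*grid, C): a short key/value sequence per box.
    """
    rois = roi_align(
        feature_map, boxes_per_image,
        output_size=(grid, grid),
        spatial_scale=spatial_scale,  # maps image coords to feature coords
        sampling_ratio=2,
    )                                 # (sum_Q, C, grid, grid)
    return rois.flatten(2).transpose(1, 2)

feat = torch.randn(2, 256, 32, 32)
boxes = [torch.tensor([[10.0, 10.0, 120.0, 200.0]]),
         torch.tensor([[0.0, 0.0, 64.0, 64.0]])]
kv = local_key_value(feat, boxes, spatial_scale=32 / 256)  # 256x256 input assumed
print(kv.shape)  # torch.Size([2, 49, 256])
```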
Recent advances in unsupervised domain adaptation (UDA) techniques have achieved great success in cross-domain computer vision tasks, enhancing the generalization ability of data-driven deep learning architectures by bridging the gap between domain distributions. For UDA-based cross-domain object detection methods, most of them alleviate the domain bias by inducing domain-invariant feature generation via adversarial learning strategies. However, their domain discriminators have limited classification ability due to the unstable adversarial training process, so the features they extract cannot be perfectly domain-invariant and still contain domain-private factors, which obstructs further alleviation of the cross-domain discrepancy. To tackle this issue, we design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information from the features used for detection task learning. Our DDF method facilitates feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module, respectively. By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method demonstrates its effectiveness and broad applicability.
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach based on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on image level and instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory, and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets including Cityscapes, KITTI, SIM10K, etc. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
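Both adaptation components reduce to the same pattern: a domain classifier trained through a gradient-reversal layer (GRL), so that fooling it makes the detector's features domain-invariant, plus a consistency term tying the image-level and instance-level predictions together. A minimal sketch (classifier shapes are assumptions, and the consistency term is simplified to batch-level agreement, whereas the paper enforces it per image):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, g):
        return -g

class ImageLevelDomainClassifier(nn.Module):
    """Patch-wise domain classifier on backbone features (image level)."""
    def __init__(self, in_ch=1024):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 256, 1), nn.ReLU(), nn.Conv2d(256, 1, 1))

    def forward(self, feat):                        # (B, C, H, W)
        return self.conv(GradReverse.apply(feat))   # (B, 1, H, W) logits

class InstanceLevelDomainClassifier(nn.Module):
    """Domain classifier on per-RoI pooled features (instance level)."""
    def __init__(self, in_dim=2048):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, roi_feats):                   # (N, in_dim)
        return self.fc(GradReverse.apply(roi_feats))  # (N, 1) logits

def consistency_loss(img_logits, inst_logits):
    """Encourage image-level and instance-level domain predictions to agree.

    Simplified: both are averaged to scalars over the batch.
    """
    img_prob = torch.sigmoid(img_logits).mean()
    inst_prob = torch.sigmoid(inst_logits).mean()
    return F.mse_loss(img_prob, inst_prob)
```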
Domain adaptation for object detection (DAOD) has recently drawn much attention owing to its ability to detect target objects without any annotations. To tackle the problem, previous works have focused on aligning features extracted from partial levels (e.g., image-level, instance-level, RPN-level) in two-stage detectors via adversarial training. However, the individual levels in the object detection pipeline are closely related to each other, and this inter-level relation has not yet been considered. To this end, we introduce a novel framework for DAOD with three proposed components: Multi-scale-aware Uncertainty Attention (MUA), Transferable Region Proposal Network (TRPN), and Dynamic Instance Sampling (DIS). With these modules, we seek to reduce the negative transfer effect during training while maximizing transferability as well as discriminability in both domains. Finally, our framework implicitly learns domain-invariant regions for object detection by exploiting transferable information and enhances the complementarity between different detection levels by collaboratively utilizing their domain information. Through ablation studies and experiments, we show that the proposed modules contribute to the performance improvement in a synergistic way, demonstrating the effectiveness of our method. Moreover, our model achieves new state-of-the-art performance on various benchmarks.
Domain adaptive detection aims to improve the generalization of detectors on target domain. To reduce discrepancy in feature distributions between two domains, recent approaches achieve domain adaptation through feature alignment in different granularities via adversarial learning. However, they neglect the relationship between multiple granularities and different features in alignment, degrading detection. Addressing this, we introduce a unified multi-granularity alignment (MGA)-based detection framework for domain-invariant feature learning. The key is to encode the dependencies across different granularities including pixel-, instance-, and category-levels simultaneously to align two domains. Specifically, based on pixel-level features, we first develop an omni-scale gated fusion (OSGF) module to aggregate discriminative representations of instances with scale-aware convolutions, leading to robust multi-scale detection. Besides, we introduce multi-granularity discriminators to identify where, either source or target domains, different granularities of samples come from. Note that MGA not only leverages instance discriminability in different categories but also exploits category consistency between two domains for detection. Furthermore, we present an adaptive exponential moving average (AEMA) strategy that explores model assessments for model update to improve pseudo labels and alleviate the local misalignment problem, boosting detection robustness. Extensive experiments on multiple domain adaptation scenarios validate the superiority of MGA over other approaches on FCOS and Faster R-CNN detectors. Code will be released at https://github.com/tiankongzhang/MGA.
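The AEMA strategy can be pictured as a plain EMA teacher whose momentum is modulated by a model assessment, e.g., relaxing the momentum when the student currently scores higher so its weights flow into the teacher faster. The sketch below is a hedged reading of that idea; the assessment signal and the modulation rule are assumptions, not the paper's formula:

```python
import torch

@torch.no_grad()
def adaptive_ema_update(teacher, student, base_momentum=0.999,
                        student_score=0.0, teacher_score=0.0, gamma=0.05):
    """EMA update whose momentum adapts to a model assessment.

    If the student currently scores higher (e.g., a validation mAP proxy),
    lower the momentum so the teacher absorbs the student faster; otherwise
    keep the teacher stable. The modulation rule is purely illustrative.
    """
    advantage = max(0.0, student_score - teacher_score)
    momentum = max(0.9, base_momentum - gamma * advantage)
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)
    return momentum
```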
Domain adaptation aims to bridge the domain shifts between the source and the target domain. These shifts may span different dimensions such as fog, rainfall, etc. However, recent methods typically do not consider explicit prior knowledge about the domain shifts on a specific dimension, thus leading to less desired adaptation performance. In this paper, we study a practical setting called Specific Domain Adaptation (SDA) that aligns the source and target domains in a demanded-specific dimension. Within this setting, we observe the intra-domain gap induced by different domainness (i.e., numerical magnitudes of domain shifts in this dimension) is crucial when adapting to a specific domain. To address the problem, we propose a novel Self-Adversarial Disentangling (SAD) framework. In particular, given a specific dimension, we first enrich the source domain by introducing a domainness creator that provides additional supervisory signals. Guided by the created domainness, we design a self-adversarial regularizer and two loss functions to jointly disentangle the latent representations into domainness-specific and domainness-invariant features, thus mitigating the intra-domain gap. Our method can be easily taken as a plug-and-play framework and does not introduce any extra cost at inference time. We achieve consistent improvements over state-of-the-art methods in both object detection and semantic segmentation.
Unsupervised domain adaptation (UDA) is critical in various computer vision tasks, such as object detection and instance segmentation, as it reduces the performance degradation induced by domain bias while also speeding up model deployment. Previous works on domain adaptive object detection attempt to align image-level and instance-level shifts to minimize the domain discrepancy, but they may align single-class features to mixed-class features during image-level domain adaptation, because each image in an object detection task may contain more than one class and object. To achieve single-class-to-single-class and mixed-class-to-mixed-class alignment, we treat mixed-class features as a new class and propose a mixed-class H-divergence for object detection to achieve homogeneous feature alignment and reduce negative transfer. A Semantic Consistency Feature Alignment Model (SCFAM) based on the mixed-class H-divergence is then presented. To improve single-class and mixed-class semantic information and accomplish semantic separation, the SCFAM model includes a Semantic Prediction Model (SPM) and a Semantic Bridging Component (SBC). The weight of the pixel-level domain discriminator loss is then adjusted according to the SPM results to reduce sample imbalance. Extensive unsupervised domain adaptation experiments on widely used datasets illustrate the robust object detection of our proposed approach under domain bias.
Universal domain adaptive object detection (UniDAOD) is more challenging than domain adaptive object detection (DAOD), since the label space of the source domain may not be identical to that of the target, and the scale of objects may vary dramatically in universal scenarios (i.e., category shift and scale shift). To this end, we propose US-DAF, a Universal Scale-Aware Domain Adaptive Faster RCNN with multi-label learning, to reduce the negative transfer effect during training while maximizing transferability as well as discriminability in both domains under a variety of scales. Specifically, our method is implemented by two modules: 1) we facilitate the feature alignment of common classes and suppress the interference of private classes by designing a filter mechanism module to overcome the negative transfer caused by category shift; 2) we fill the gap of scale-aware adaptation in object detection by introducing a new multi-label scale-aware adapter to perform individual alignment between the corresponding scales of the two domains. Experiments show that US-DAF achieves state-of-the-art results in three scenarios (i.e., open-set, partial-set, and closed-set), with relative improvements of 7.1% and 5.9% on the benchmark datasets Clipart1k and Watercolor, respectively.
Object detection networks have reached an impressive level of performance, yet a lack of suitable data in specific applications often limits them in practice. Typically, additional data sources are utilized to support the training task. In these, however, domain gaps between different data sources pose a challenge for deep learning. GAN-based image-to-image style transfer is commonly applied to shrink the domain gap, but it is unstable and decoupled from the object detection task. We propose AWADA, an Attention-Weighted Adversarial Domain Adaptation framework that creates a feedback loop between the style-transformation and detection tasks. By constructing foreground object attention maps from object detector proposals, we focus the transformation on foreground object regions and stabilize style-transfer training. In extensive experiments and ablation studies, we show that AWADA reaches state-of-the-art unsupervised domain adaptation object detection performance on commonly used benchmarks for tasks such as synthetic-to-real, adverse-weather, and cross-camera adaptation.
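The foreground attention map AWADA feeds back into style transfer can be built by splatting detector proposals onto an image-sized grid, weighted by confidence, and using the result to re-weight per-pixel transfer losses. A minimal sketch; the confidence weighting and the background floor are assumptions:

```python
import torch

def foreground_attention_map(proposals, scores, height, width):
    """Rasterize detector proposals into a [0, 1] foreground attention map.

    proposals: (N, 4) xyxy boxes in pixel coordinates; scores: (N,) confidences.
    """
    attn = torch.zeros(height, width)
    for box, s in zip(proposals, scores):
        x1, y1, x2, y2 = (int(v) for v in box.tolist())
        x1, x2 = max(x1, 0), min(x2, width)
        y1, y2 = max(y1, 0), min(y2, height)
        attn[y1:y2, x1:x2] += float(s)
    return attn.clamp(max=1.0)

# Weight a per-pixel style-transfer loss toward foreground regions,
# keeping a floor so background still receives some training signal:
boxes = torch.tensor([[20.0, 30.0, 80.0, 90.0]])
attn = foreground_attention_map(boxes, torch.tensor([0.9]), 128, 128)
pixel_loss = torch.rand(128, 128)            # stand-in for a per-pixel GAN loss
weighted_loss = (pixel_loss * (0.5 + 0.5 * attn)).mean()
```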
Although detection with transformer (DETR) is increasingly popular, its global attention modeling requires an extremely long training period to optimize and achieve promising detection performance. Unlike existing studies that mainly develop advanced feature or embedding designs to tackle the training issue, we point out that Region-of-Interest (RoI) based detection refinement can easily help mitigate the training difficulty of DETR methods. Based on this, we introduce a novel REcurrent Glimpse-based decOder (REGO) in this paper. In particular, REGO employs a multi-stage recurrent processing structure to help the attention of DETR gradually focus on foreground objects more accurately. In each processing stage, visual features are extracted as glimpse features from RoIs given by enlarged bounding box areas of the detection results of the previous stage. Then, a glimpse-based decoder is introduced to provide refined detection results based on both the glimpse features and the attention modeling outputs of the previous stage. In practice, REGO can be easily embedded into representative DETR variants while maintaining their fully end-to-end training and inference pipelines. In particular, REGO helps Deformable DETR achieve 44.8 AP on the MSCOCO dataset with only 36 training epochs, whereas the original DETR and Deformable DETR require 500 and 50 epochs, respectively, to achieve comparable performance. Experiments also show that REGO consistently boosts the performance of different DETR detectors by up to 7% relative gain under the same setting of 50 training epochs. Code is available via https://github.com/zhechen/deformable-detr-rego.
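The glimpse step amounts to enlarging the previous stage's boxes and RoI-pooling features from the enlarged regions. A minimal sketch with torchvision; the enlargement factor and grid size are assumptions:

```python
import torch
from torchvision.ops import roi_align

def glimpse_features(feature_map, boxes, spatial_scale, scale_factor=1.5, grid=7):
    """Extract glimpse features from enlarged boxes of the previous stage.

    feature_map: (B, C, H, W); boxes: list of (Q, 4) xyxy tensors per image.
    Returns (sum_Q, grid*grid, C) glimpse tokens for a glimpse-based decoder.
    """
    enlarged = []
    for b in boxes:
        cx, cy = (b[:, 0] + b[:, 2]) / 2, (b[:, 1] + b[:, 3]) / 2
        w = (b[:, 2] - b[:, 0]) * scale_factor
        h = (b[:, 3] - b[:, 1]) * scale_factor
        enlarged.append(torch.stack(
            [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=1))
    rois = roi_align(feature_map, enlarged, output_size=(grid, grid),
                     spatial_scale=spatial_scale, sampling_ratio=2)
    return rois.flatten(2).transpose(1, 2)
```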
Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance as previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on spatial-temporal Transformer architectures. The first goal of this paper is to streamline the pipeline of VOD, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow model, relation networks. Besides, benefiting from the object query design in DETR, our method does not need complicated post-processing methods such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal transformer consists of two components: Temporal Query Encoder (TQE) to fuse object queries, and Temporal Deformable Transformer Decoder (TDTD) to obtain current frame detection results. These designs boost the strong baseline deformable DETR by a significant margin (2%-4% mAP) on the ImageNet VID dataset. TransVOD yields comparable performances on the benchmark of ImageNet VID. Then, we present two improved versions of TransVOD including TransVOD++ and TransVOD Lite. The former fuses object-level information into object query via dynamic convolution while the latter models the entire video clips as the output to speed up the inference time. We give a detailed analysis of all three models in the experiment part. In particular, our proposed TransVOD++ sets a new state-of-the-art record in terms of accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed and accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU device. Code and models will be available for further research.
DETR is the first end-to-end object detector using a transformer encoder-decoder architecture, and it demonstrates competitive performance but low computational efficiency on high-resolution feature maps. The subsequent work, Deformable DETR, enhances the efficiency of DETR by replacing dense attention with deformable attention, achieving 10x faster convergence and improved performance. Deformable DETR uses multi-scale features to improve performance; however, the number of encoder tokens increases by 20x compared to DETR, and the computational cost of the encoder attention remains a bottleneck. In our preliminary experiments, we observe that the detection performance hardly deteriorates even if only a portion of the encoder tokens is updated. Inspired by this observation, we propose Sparse DETR, which selectively updates only the tokens expected to be referenced by the decoder, thus detecting objects efficiently. In addition, we show that applying an auxiliary detection loss on the selected tokens in the encoder improves performance while minimizing computational overhead. We validate that Sparse DETR achieves better performance than Deformable DETR even with only 10% of the encoder tokens on the COCO dataset. Although only the encoder tokens are sparsified, the total computational cost decreases by 38% and frames per second (FPS) increases by 42% compared with Deformable DETR. Code is available at https://github.com/kakaobrain/sparse-detr.
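The core mechanism — updating only the encoder tokens the decoder is likely to reference — can be sketched as a scoring head followed by a top-rho selection mask, with unselected tokens carried through unchanged. Two simplifications to note: the scoring head below is a stand-in (the paper supervises it with the decoder's cross-attention map), and here the selected tokens attend only among themselves, whereas in the paper they attend over the full token set:

```python
import torch
import torch.nn as nn

class TokenSelector(nn.Module):
    """Select the top rho fraction of encoder tokens for refinement."""
    def __init__(self, d_model=256, rho=0.1):
        super().__init__()
        self.score = nn.Linear(d_model, 1)  # stand-in saliency head
        self.rho = rho

    def forward(self, tokens):                      # (B, N, d_model)
        b, n, _ = tokens.shape
        k = max(1, int(n * self.rho))
        saliency = self.score(tokens).squeeze(-1)   # (B, N)
        topk = saliency.topk(k, dim=1).indices      # (B, k)
        mask = torch.zeros(b, n, dtype=torch.bool, device=tokens.device)
        mask.scatter_(1, topk, True)
        return mask                                 # True = update this token

def sparse_encoder_layer(layer, tokens, mask):
    """Apply an encoder layer only to selected tokens; pass the rest through."""
    refined = tokens.clone()
    for i in range(tokens.size(0)):                 # per image, since k may vary
        selected = tokens[i, mask[i]].unsqueeze(0)
        refined[i, mask[i]] = layer(selected).squeeze(0)
    return refined

# Usage with a batch-first encoder layer:
layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
tokens = torch.randn(2, 300, 256)
mask = TokenSelector()(tokens)
out = sparse_encoder_layer(layer, tokens, mask)     # only ~10% of tokens refined
```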
Vision-Centric Bird-Eye-View (BEV) perception has shown promising potential and attracted increasing attention in autonomous driving. Recent works mainly focus on improving efficiency or accuracy but neglect the domain shift problem, resulting in severe degradation of transfer performance. With extensive observations, we figure out the significant domain gaps existing in the scene, weather, and day-night changing scenarios and make the first attempt to solve the domain adaptation problem for multi-view 3D object detection. Since BEV perception approaches are usually complicated and contain several components, the domain shift accumulation on multi-latent spaces makes BEV domain adaptation challenging. In this paper, we propose a novel Multi-level Multi-space Alignment Teacher-Student ($M^{2}ATS$) framework to ease the domain shift accumulation, which consists of a Depth-Aware Teacher (DAT) and a Multi-space Feature Aligned (MFA) student model. Specifically, DAT model adopts uncertainty guidance to sample reliable depth information in target domain. After constructing domain-invariant BEV perception, it then transfers pixel and instance-level knowledge to student model. To further alleviate the domain shift at the global level, MFA student model is introduced to align task-relevant multi-space features of two domains. To verify the effectiveness of $M^{2}ATS$, we conduct BEV 3D object detection experiments on four cross-domain scenarios and achieve state-of-the-art performance (e.g., +12.6% NDS and +9.1% mAP on Day-Night). Code and dataset will be released.
Multimodal transformers exhibit high capacity and flexibility in aligning image and text for visual grounding. However, the encoder-only grounding framework (e.g., TransVG) suffers from heavy computation due to the quadratic time complexity of the self-attention operation. To address this issue, we present Dynamic MDETR, a new multimodal transformer architecture that decouples the whole grounding process into encoding and decoding phases. The key observation is that there exists high spatial redundancy in images. Thus, we devise a new dynamic multimodal transformer decoder that exploits this sparsity prior to speed up the visual grounding process. Specifically, our dynamic decoder is composed of a 2D adaptive sampling module and a text-guided decoding module. The sampling module aims to select informative patches by predicting offsets with respect to a reference point, while the decoding module extracts the grounded object information by performing cross-attention between image features and text features. These two modules are stacked to gradually bridge the modality gap and iteratively refine the reference point of the grounded object, eventually realizing the objective of visual grounding. Extensive experiments on five benchmarks demonstrate that our proposed Dynamic MDETR achieves a competitive trade-off between computation and accuracy. Notably, using only 9% of the feature points in the decoder, we can reduce the GFLOPs of the multimodal transformer by ~44%, yet still attain higher accuracy than the encoder-only counterpart. In addition, to verify its generalization ability and scale up our Dynamic MDETR, we build the first one-stage CLIP-empowered visual grounding framework and achieve state-of-the-art performance on these benchmarks.
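The 2D adaptive sampling module can be sketched with `F.grid_sample`: predict offsets relative to a reference point from a (text-conditioned) query vector, then sample the image feature map at those locations to obtain a short sequence of informative patches. The linear offset head, the offset range, and the single feature level are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSampler2D(nn.Module):
    """Sample n_points image features around a reference point.

    Offsets are predicted from a (text-conditioned) query vector; the
    reference point and offsets live in normalized [0, 1] (x, y) coordinates.
    """
    def __init__(self, d_model=256, n_points=36):
        super().__init__()
        self.offset_head = nn.Linear(d_model, n_points * 2)
        self.n_points = n_points

    def forward(self, feat, query, ref_point):
        # feat: (B, C, H, W); query: (B, d_model); ref_point: (B, 2) in [0, 1]
        b = feat.size(0)
        offsets = self.offset_head(query).view(b, self.n_points, 2).tanh() * 0.1
        points = (ref_point.unsqueeze(1) + offsets).clamp(0, 1)   # (B, P, 2)
        grid = points * 2 - 1                 # map to grid_sample's [-1, 1]
        sampled = F.grid_sample(
            feat, grid.unsqueeze(2), align_corners=False)         # (B, C, P, 1)
        return sampled.squeeze(-1).transpose(1, 2)                # (B, P, C)

sampler = AdaptiveSampler2D()
feats = sampler(torch.randn(2, 256, 20, 20), torch.randn(2, 256),
                torch.full((2, 2), 0.5))
print(feats.shape)  # torch.Size([2, 36, 256])
```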
We propose an approach for unsupervised adaptation of object detectors from label-rich to label-poor domains which can significantly reduce annotation costs associated with detection. Recently, approaches that align distributions of source and target images using an adversarial loss have been proven effective for adapting object classifiers. However, for object detection, fully matching the entire distributions of source and target images to each other at the global image level may fail, as domains could have distinct scene layouts and different combinations of objects. On the other hand, strong matching of local features such as texture and color makes sense, as it does not change category level semantics. This motivates us to propose a novel method for detector adaptation based on strong local alignment and weak global alignment. Our key contribution is the weak alignment model, which focuses the adversarial alignment loss on images that are globally similar and puts less emphasis on aligning images that are globally dissimilar. Additionally, we design the strong domain alignment model to only look at local receptive fields of the feature map. We empirically verify the effectiveness of our method on four datasets comprising both large and small domain shifts. Our code is available at https://github.com/VisionLearningGroup/DA_Detection.
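The weak global alignment can be written as a focal variant of the usual adversarial domain loss: images the domain classifier finds hard (globally similar across domains) receive full alignment pressure, while easy, globally dissimilar ones are down-weighted. A minimal sketch of such a loss; the gamma value is an assumption:

```python
import torch
import torch.nn.functional as F

def weak_global_alignment_loss(domain_logits, is_target, gamma=3.0):
    """Focal-weighted domain classification loss for weak global alignment.

    domain_logits: (B,) raw logits from a global domain classifier fed
    through a gradient-reversal layer; is_target: (B,) floats in {0., 1.}.
    Examples the classifier finds easy (globally dissimilar domains) get
    small weights, focusing adversarial alignment on similar images.
    """
    p = torch.sigmoid(domain_logits)
    p_correct = torch.where(is_target.bool(), p, 1 - p)  # prob of true domain
    focal_weight = (1 - p_correct) ** gamma              # small when easy
    bce = F.binary_cross_entropy_with_logits(
        domain_logits, is_target, reduction="none")
    return (focal_weight * bce).mean()

loss = weak_global_alignment_loss(torch.randn(4), torch.tensor([0., 0., 1., 1.]))
```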
In this paper, we address panoramic semantic segmentation, which provides a full-view and dense-pixel understanding of surroundings in a holistic way. Panoramic segmentation is under-explored due to two critical challenges: (1) image distortions and object deformations on panoramas; (2) a lack of annotations for training panoramic segmenters. To tackle these problems, we propose a Transformer for Panoramic Semantic Segmentation (Trans4PASS) architecture. First, to enhance distortion awareness, Trans4PASS, equipped with Deformable Patch Embedding (DPE) and Deformable MLP (DMLP) modules, is capable of handling object deformations and image distortions whenever (before or after adaptation) and wherever (at shallow or deep levels) by design. We further introduce the upgraded Trans4PASS+ model, featuring DMLPv2 with parallel token mixing to improve the flexibility and generalizability in modeling discriminative cues. Second, we propose a Mutual Prototypical Adaptation (MPA) strategy for unsupervised domain adaptation. Third, aside from Pinhole-to-Panoramic (Pin2Pan) adaptation, we create a new dataset (SynPASS) with 9,080 panoramic images to explore a Synthetic-to-Real (Syn2Real) adaptation scheme in 360° imagery. Extensive experiments covering indoor and outdoor scenarios are conducted, and each setting is investigated with both the Pin2Pan and Syn2Real regimes. Trans4PASS+ achieves state-of-the-art performance on four domain-adaptive panoramic semantic segmentation benchmarks. Code is available at https://github.com/jamycheung/trans4pass.
Domain adaptive object detection (DAOD) aims to alleviate transfer performance degradation caused by the cross-domain discrepancy. However, most existing DAOD methods are dominated by computationally intensive two-stage detectors, which are not the first choice for industrial applications. In this paper, we propose a novel semi-supervised domain adaptive YOLO (SSDA-YOLO) based method to improve cross-domain detection performance by integrating the compact one-stage detector YOLOv5 with domain adaptation. Specifically, we adapt the knowledge distillation framework with the Mean Teacher model to assist the student model in obtaining instance-level features of the unlabeled target domain. We also utilize the scene style transfer to cross-generate pseudo images in different domains for remedying image-level differences. In addition, an intuitive consistency loss is proposed to further align cross-domain predictions. We evaluate our proposed SSDA-YOLO on public benchmarks including PascalVOC, Clipart1k, Cityscapes, and Foggy Cityscapes. Moreover, to verify its generalization, we conduct experiments on yawning detection datasets collected from various classrooms. The results show considerable improvements of our method in these DAOD tasks. Our code is available on \url{https://github.com/hnuzhy/SSDA-YOLO}.