Depth cues are known to be useful for visual perception. However, direct measurement of depth is often impractical. Fortunately, modern learning-based methods offer promising depth maps by inference in the wild. In this work, we adapt such depth inference models for object segmentation using the objects' ``pop-out'' prior in 3D. The ``pop-out'' prior is a simple compositional assumption that objects reside on the background surface. This prior allows us to reason about objects in 3D space. More specifically, we adapt the inferred depth maps such that objects can be localized using 3D information alone. Such separation, however, requires knowledge of the contact surface, which we learn using the weak supervision of the segmentation mask. Our intermediate representation of the contact surface, and thereby reasoning about objects purely in 3D, allows us to better transfer depth knowledge into semantics. The proposed adaptation method uses only the depth model, without needing the source data used for its training, which makes the learning process efficient and practical. Our experiments on eight datasets across two challenging tasks, namely camouflaged object detection and salient object detection, consistently demonstrate the benefit of our method in terms of both performance and generalizability.
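To make the ``pop-out'' prior concrete, here is a minimal sketch (an illustration only, not the paper's method): assuming the background can be approximated by a plane, it fits that plane to an inferred depth map with RANSAC and keeps pixels that protrude toward the camera. The thresholds, the planar-background assumption, and the function name popout_mask are hypothetical; the paper instead learns the contact surface under weak supervision from segmentation masks.

```python
import numpy as np

def popout_mask(depth, iters=200, inlier_thresh=0.05, popout_margin=0.1, rng=None):
    """Segment pixels that 'pop out' of a planar support surface fitted to a depth map.

    depth: (H, W) array of depths (larger = farther from the camera).
    Returns a boolean mask of pixels closer to the camera than the fitted plane
    by more than `popout_margin`.
    """
    rng = np.random.default_rng(rng)
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), depth.ravel()], axis=1).astype(np.float64)

    best_inliers, best_plane = 0, None
    for _ in range(iters):                       # RANSAC over 3-point plane hypotheses
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-8:                          # skip degenerate (collinear) samples
            continue
        normal /= norm
        dist = np.abs((pts - sample[0]) @ normal)
        inliers = (dist < inlier_thresh).sum()
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, sample[0])

    normal, origin = best_plane
    if normal[2] < 0:                            # orient the normal along increasing depth
        normal = -normal
    signed = (pts - origin) @ normal             # signed distance to the support plane
    return signed.reshape(h, w) < -popout_margin # pixels in front of the plane "pop out"

# toy usage: a flat background at depth 1.0 with a box popping out at depth 0.6
depth = np.full((64, 64), 1.0)
depth[20:40, 20:40] = 0.6
print(popout_mask(depth).sum(), "pop-out pixels")
```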
Identifying and segmenting camouflaged objects from the background is challenging. Inspired by the multi-head self-attention in Transformers, we present a simple masked separable attention (MSA) for camouflaged object detection. We first separate the multi-head self-attention into three parts, which are responsible for distinguishing the camouflaged objects from the background using different mask strategies. Furthermore, we propose to capture high-resolution semantic representations progressively with a simple top-down decoder built on the proposed MSA to attain precise segmentation results. These structures plus a backbone encoder form a new model, dubbed CamoFormer. Extensive experiments show that CamoFormer surpasses all existing state-of-the-art methods on three widely used camouflaged object detection benchmarks, with on average around 5% relative improvement over previous methods in terms of S-measure and weighted F-measure.
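The abstract does not spell out the mask strategies, so the following is only a hedged sketch of the general idea of splitting attention heads into foreground-only, background-only, and global groups via additive masks on the attention logits. The head split, the 0.5 threshold on a coarse foreground map, and the function name are assumptions for illustration and do not reproduce the exact CamoFormer design.

```python
import torch

def masked_separable_attention(q, k, v, fg_prob, num_heads=8):
    """Toy masked attention: the heads are split into three groups that attend
    to foreground tokens only, background tokens only, or all tokens.

    q, k, v: (B, N, C) token features; fg_prob: (B, N) coarse foreground probability.
    """
    B, N, C = q.shape
    d = C // num_heads

    def split(x):                                   # (B, N, C) -> (B, heads, N, d)
        return x.view(B, N, num_heads, d).transpose(1, 2)

    q, k, v = split(q), split(k), split(v)
    logits = (q @ k.transpose(-2, -1)) / d**0.5     # (B, heads, N, N)

    NEG = -1e4                                      # large negative used instead of -inf
    fg = (fg_prob > 0.5).float()                    # hard foreground/background split
    fg_mask = ((1.0 - fg) * NEG)[:, None, None, :]  # blocks attention to background keys
    bg_mask = (fg * NEG)[:, None, None, :]          # blocks attention to foreground keys

    h = num_heads // 3                              # rough 3-way head split (an assumption)
    logits[:, :h] = logits[:, :h] + fg_mask         # group 1: foreground-only attention
    logits[:, h:2 * h] = logits[:, h:2 * h] + bg_mask   # group 2: background-only attention
    # the remaining heads attend everywhere (no mask)

    attn = logits.softmax(dim=-1)
    out = (attn @ v).transpose(1, 2).reshape(B, N, C)
    return out

# toy usage: 2 images, 16 tokens, 64 channels, random coarse mask
x = torch.randn(2, 16, 64)
mask = torch.rand(2, 16)
print(masked_separable_attention(x, x, x, mask).shape)   # torch.Size([2, 16, 64])
```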
We present OSFormer, the first one-stage transformer framework for camouflaged instance segmentation (CIS). OSFormer is based on two key designs. First, we design a location-sensing transformer (LST) to obtain location labels and instance-aware parameters by introducing location-guided queries and a blend-convolution feedforward network. Second, we develop a coarse-to-fine fusion (CFF) module to merge diverse context information from the LST encoder and the CNN backbone. Coupling these two components enables OSFormer to efficiently blend local features and long-range context dependencies for predicting camouflaged instances. Compared with two-stage frameworks, our OSFormer reaches 41% AP and achieves good convergence efficiency without requiring large amounts of training data, i.e., only 3,040 samples under 60 epochs. Code link: https://github.com/pjlallen/osformer.
In this paper, we present a novel end-to-end group collaborative learning network, termed GCoNet+, which can effectively and efficiently (250 fps) identify co-salient objects in natural scenes. The proposed GCoNet+ achieves new state-of-the-art performance for co-salient object detection (CoSOD) by mining consensus representations based on the following two essential criteria: 1) intra-group compactness, to better formulate the consistency among co-salient objects by capturing their inherent shared attributes with our novel group affinity module (GAM); 2) inter-group separability, to effectively suppress the influence of noisy objects on the output by introducing our new group collaborating module (GCM) conditioned on the inconsistent consensus. To further improve the accuracy, we design a series of simple yet effective components: i) a recurrent auxiliary classification module (RACM) that promotes model learning at the semantic level; ii) a confidence enhancement module (CEM) that helps the model improve the quality of the final predictions; and iii) a group-based symmetric triplet (GST) loss that guides the model to learn more discriminative features. Extensive experiments on three challenging benchmarks, i.e., CoCA, CoSOD3k, and CoSal2015, demonstrate that our GCoNet+ outperforms 12 existing cutting-edge models. The code has been released at https://github.com/zhengpeng7/gconet_plus.
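The abstract names the group affinity module without detailing it; the sketch below shows one plausible way intra-group consensus could be mined, by correlating each image's features with a group-averaged feature direction to highlight regions shared across the group. The pooling and cosine-correlation steps are assumptions for illustration, not the published GAM.

```python
import torch
import torch.nn.functional as F

def group_affinity(feats):
    """Toy intra-group affinity: correlate each image's features with a
    group-level consensus feature to highlight regions shared across the group.

    feats: (G, C, H, W) features of G images assumed to contain the same
    object category. Returns (G, 1, H, W) affinity maps in [0, 1].
    """
    f = F.normalize(feats, dim=1)                            # unit-norm channel vectors
    consensus = F.normalize(f.mean(dim=(0, 2, 3)), dim=0)    # (C,) group consensus direction
    affinity = torch.einsum('gchw,c->ghw', f, consensus)     # cosine similarity per location
    return affinity.clamp(min=0).unsqueeze(1)                # keep positive agreement only

# toy usage: a "group" of 4 images with 32-channel features
feats = torch.randn(4, 32, 24, 24)
print(group_affinity(feats).shape)                           # torch.Size([4, 1, 24, 24])
```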
This paper introduces DGNet, a novel deep framework that exploits object gradient supervision for camouflaged object detection (COD). It decouples the task into two connected branches, i.e., a context encoder and a texture encoder. The essential connection between them is a gradient-induced transition, which represents a soft grouping between context and texture features. Benefiting from this simple but efficient framework, DGNet outperforms existing state-of-the-art COD models by a large margin. Notably, our efficient version, DGNet-S, runs in real time (80 fps) and achieves results comparable to the cutting-edge model JCSOD-CVPR$_{21}$ with only 6.82% of its parameters. Application results also show that the proposed DGNet performs well on polyp segmentation, defect detection, and transparent object segmentation tasks. The code will be available at https://github.com/gewelsji/dgnet.
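Object gradient supervision is not defined in the abstract; as an assumption for illustration, the snippet below builds one plausible auxiliary target by restricting the image gradient magnitude to the ground-truth object region, i.e., the kind of texture signal a texture branch could be trained against. It is not claimed to be DGNet's actual supervision pipeline.

```python
import torch
import torch.nn.functional as F

def object_gradient_target(image, mask):
    """Toy auxiliary target: gradient magnitude of the image, kept only inside
    the ground-truth object mask.

    image: (B, 1, H, W) grayscale image in [0, 1]; mask: (B, 1, H, W) binary GT.
    """
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)
    gx = F.conv2d(image, sobel_x, padding=1)      # horizontal intensity changes
    gy = F.conv2d(image, sobel_y, padding=1)      # vertical intensity changes
    grad = torch.sqrt(gx**2 + gy**2 + 1e-8)       # gradient magnitude
    grad = grad / grad.amax(dim=(2, 3), keepdim=True).clamp(min=1e-8)
    return grad * mask                            # supervise texture only on the object

# toy usage
img = torch.rand(2, 1, 64, 64)
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(object_gradient_target(img, gt).shape)      # torch.Size([2, 1, 64, 64])
```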
Prey in the wild evolve to be camouflaged to avoid being recognized by predators. In this way, camouflage acts as a key defence mechanism across species that is critical to survival. To detect and segment the whole scope of a camouflaged object, camouflaged object detection (COD) is introduced as a binary segmentation task, with the binary ground-truth camouflage map indicating the exact regions of the camouflaged objects. In this paper, we revisit this task and argue that the binary segmentation setting fails to fully capture the concept of camouflage. We find that explicitly modeling the conspicuousness of camouflaged objects against their particular backgrounds not only leads to a better understanding of camouflage, but also provides guidance for designing more sophisticated camouflage techniques. Furthermore, we observe that it is specific parts of camouflaged objects that make them detectable by predators. With the above understanding of camouflaged objects, we present the first triple-task learning framework to simultaneously localize, segment, and rank camouflaged objects, indicating the conspicuousness level of the camouflage. As no corresponding datasets exist for either the localization or the ranking model, we generate localization maps with an eye tracker, which are then processed according to the instance-level labels to generate our ranking-based training and testing dataset. We also contribute the largest COD testing set to comprehensively analyse the performance of COD models. Experimental results show that our triple-task learning framework achieves a new state-of-the-art, leading to a more explainable COD network. Our code, data, and results are available at: \url{https://github.com/JingZhang617/COD-Rank-Localize-and-Segment}.
We present the first comprehensive study of video polyp segmentation (VPS) in the deep learning era. Over the years, progress on VPS has not come easily due to the lack of large-scale, fine-grained segmentation annotations. To address this issue, we first introduce a high-quality, frame-by-frame annotated dataset named SUN-SEG, which contains 158,690 frames from the well-known SUN database. We provide additional annotations of diverse types, i.e., attributes, object masks, boundaries, scribbles, and polygons. Second, we design a simple but effective baseline, named PNS+, which consists of a global encoder, a local encoder, and normalized self-attention (NS) blocks. The global and local encoders receive an anchor frame and multiple successive frames to extract long-term and short-term spatio-temporal representations, which are then progressively refined by two NS blocks. Extensive experiments show that PNS+ achieves the best performance with real-time inference speed (170 fps), making it a promising solution for the VPS task. Third, we extensively evaluate 13 representative polyp/object segmentation models on our SUN-SEG dataset and provide attribute-based comparisons. Finally, we discuss several open issues and suggest possible research directions for the VPS community.
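The exact normalized self-attention block is not described in the abstract; the sketch below only illustrates the general pattern of letting an anchor frame query features pooled from several successive frames, with softmax-normalized attention weights and a residual update. The shapes and the single-head formulation are assumptions, not the PNS+ implementation.

```python
import torch

def cross_frame_attention(anchor, frames):
    """Toy spatio-temporal attention: pixels of an anchor frame attend to all
    pixels of several successive frames.

    anchor: (B, C, H, W) features of the anchor frame.
    frames: (B, T, C, H, W) features of T successive frames.
    Returns refined anchor features of shape (B, C, H, W).
    """
    B, C, H, W = anchor.shape
    T = frames.shape[1]
    q = anchor.flatten(2).transpose(1, 2)                        # (B, HW, C) queries
    kv = frames.permute(0, 1, 3, 4, 2).reshape(B, T * H * W, C)  # (B, THW, C) keys/values

    logits = q @ kv.transpose(1, 2) / C**0.5                     # (B, HW, THW)
    attn = logits.softmax(dim=-1)                                # normalize over space-time keys
    out = attn @ kv                                              # (B, HW, C)
    return anchor + out.transpose(1, 2).reshape(B, C, H, W)      # residual refinement

# toy usage: anchor frame plus 4 successive frames with 32-channel features
anchor = torch.randn(1, 32, 16, 16)
frames = torch.randn(1, 4, 32, 16, 16)
print(cross_frame_attention(anchor, frames).shape)               # torch.Size([1, 32, 16, 16])
```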
We present a systematic study on a new task called dichotomous image segmentation (DIS), which aims to segment highly accurate objects from natural images. To this end, we collected the first large-scale DIS dataset, called DIS5K, which contains 5,470 high-resolution (e.g., 2K, 4K, or larger) images covering camouflaged, salient, or meticulous objects in various backgrounds. DIS is annotated with extremely fine-grained labels. In addition, we introduce a simple intermediate supervision baseline (IS-Net) that uses both feature-level and mask-level guidance for model training. IS-Net outperforms various cutting-edge baselines on the proposed DIS5K, making it a general self-learned supervision network that can facilitate future DIS research. Furthermore, we design a new metric called human correction efforts (HCE), which approximates the number of mouse-click operations required to correct false positives and false negatives. HCE is used to measure the gap between models and real-world applications and can therefore complement existing metrics. Finally, we conduct the largest-scale benchmarking, evaluating 16 representative segmentation models, provide a deeper discussion of object complexity, and show several potential applications (e.g., background removal, art design, 3D reconstruction). We hope these efforts open up promising directions for both academia and industry. Project page: https://xuebinqin.github.io/dis/index.html.
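The official HCE follows a specific correction protocol; the snippet below (which assumes SciPy is available) is only a crude stand-in that counts sizable connected false-positive and false-negative regions as if each required one corrective click, to make the intent of the metric concrete.

```python
import numpy as np
from scipy import ndimage

def click_proxy(pred, gt, min_area=10):
    """Crude proxy for human correction effort: count connected regions of
    false positives and false negatives larger than `min_area` pixels, as if
    each such region required one corrective click.

    pred, gt: (H, W) binary masks.
    """
    fp = np.logical_and(pred == 1, gt == 0)
    fn = np.logical_and(pred == 0, gt == 1)

    def count_regions(err):
        labels, n = ndimage.label(err)                     # 4-connected components (SciPy default)
        sizes = ndimage.sum(err, labels, range(1, n + 1))  # pixel count per error region
        return int((np.asarray(sizes) >= min_area).sum())

    return count_regions(fp) + count_regions(fn)

# toy usage: the prediction misses a strip of the object and hallucinates a blob
gt = np.zeros((64, 64), dtype=np.uint8); gt[10:30, 10:30] = 1
pred = np.zeros_like(gt); pred[12:30, 10:30] = 1; pred[40:50, 40:50] = 1
print(click_proxy(pred, gt))   # 2: one false-negative region plus one false-positive region
```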
The goal of this paper is a comprehensive study of the facial sketch synthesis (FSS) problem. However, due to the high cost of obtaining hand-drawn sketch datasets, there has been no complete benchmark for evaluating the development of FSS algorithms over the past decade. We therefore first introduce a high-quality dataset for FSS, named FS2K, which consists of 2,104 image-sketch pairs spanning three types of sketch styles, image backgrounds, lighting conditions, skin colors, and facial attributes. FS2K differs from previous FSS datasets in difficulty, diversity, and scalability, and should thus promote progress in FSS research. Second, we survey 139 classical methods, including 34 handcrafted-feature-based facial sketch synthesis methods, 37 general neural style transfer methods, 43 deep image-to-image translation methods, and 35 image-to-sketch methods. In addition, we conduct comprehensive experiments on 19 existing cutting-edge models. Third, we present a simple baseline for FSS, named FSGAN. With only two straightforward components, i.e., facial-aware masking and style-vector expansion, FSGAN surpasses the performance of all previous state-of-the-art models on the proposed FS2K dataset by a large margin. Finally, we conclude with lessons learned over the past years and point out several unsolved challenges. Our open-source code is available at https://github.com/dengpingfan/fsgan.
Existing RGB-D saliency detection models do not explicitly encourage the RGB and depth modalities to achieve effective multi-modal learning. In this paper, we introduce a novel multi-stage cascaded learning framework via mutual information minimization to "explicitly" model the multi-modal information between the RGB image and the depth data. Specifically, we first map the features of each modality to a lower-dimensional feature vector and adopt mutual information minimization as a regularizer to reduce the redundancy between the appearance features from RGB and the geometric features from depth. We then perform multi-stage cascaded learning, imposing the mutual information minimization constraint at each stage of the network. Extensive experiments on benchmark RGB-D saliency datasets illustrate the effectiveness of our framework. Moreover, to prosper the development of this field, we contribute the largest dataset (7x larger than NJU2K), which contains 15,625 image pairs with high-quality polygon-/scribble-/object-/instance-/rank-level annotations. Based on these rich labels, we additionally construct four new benchmarks with strong baselines and observe some interesting phenomena that can motivate future model design. The source code and dataset are available at "https://github.com/jingzhang617/cascaded_rgbd_sod".
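The abstract does not give the estimator used for mutual information minimization; as one simple stand-in, the sketch below assumes each latent dimension of the two modalities is jointly Gaussian, in which case I(X;Y) = -0.5 log(1 - rho^2) per dimension, and penalizes that quantity to push the RGB and depth latents toward decorrelation. The Gaussian assumption and the function name are illustrative only, not the paper's formulation.

```python
import torch

def gaussian_mi_penalty(z_rgb, z_depth, eps=1e-6):
    """Toy MI-style redundancy penalty between two batches of latent vectors.

    Under a per-dimension Gaussian assumption, I(X; Y) = -0.5 * log(1 - rho^2),
    where rho is the Pearson correlation of paired latents across the batch.
    Minimizing this pushes the RGB and depth latents toward decorrelation.

    z_rgb, z_depth: (B, D) latent vectors of the two modalities.
    """
    x = z_rgb - z_rgb.mean(dim=0, keepdim=True)
    y = z_depth - z_depth.mean(dim=0, keepdim=True)
    rho = (x * y).mean(dim=0) / (
        x.std(dim=0, unbiased=False) * y.std(dim=0, unbiased=False) + eps
    )
    rho = rho.clamp(-1 + eps, 1 - eps)                 # keep log(1 - rho^2) finite
    return (-0.5 * torch.log(1 - rho**2)).sum()

# toy usage: placeholder projections of RGB and depth features to 32-D latents
z_rgb, z_depth = torch.randn(16, 32), torch.randn(16, 32)
print(gaussian_mi_penalty(z_rgb, z_depth).item())
```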